Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........


How Molecules Escape from Cell's Nucleus: Key Advance in Using Microscopy to Reveal Secrets of Living Cells

The results, published in the September 15 online edition of Nature, mark a major advance in the use of microscopes for scientific investigation (microscopy). The findings could lead to treatments for disorders such as myotonic dystrophy in which messenger RNA gets stuck inside the nucleus of cells.

 Real-Time mRNA Export: Messenger RNA molecules (green structures) passing through the nuclear pore (red) from the nucleus to the cytoplasm.


Robert Singer, Ph.D., professor and co-chair of anatomy and structural biology, professor of cell biology and neuroscience, and co-director of the Gruss-Lipper Biophotonics Center at Einstein, is the study's senior author. His co-author, David Grünwald, is at the Kavli Institute of Nanoscience at Delft University of Technology, The Netherlands. Prior to their work, the limit of microscopy resolution was 200 nanometers (billionths of a meter), meaning that molecules closer together than that could not be distinguished as separate entities in living cells. In this paper, the researchers improved that resolution limit tenfold, successfully differentiating molecules only 20 nanometers apart.

Protein synthesis is arguably the most important of all cellular processes. The instructions for making proteins are encoded in the deoxyribonucleic acid (DNA) of genes, which reside on chromosomes in the nucleus of a cell. In protein synthesis, the DNA instructions of a gene are transcribed, or copied, onto messenger RNA; these messenger RNA molecules must then travel out of the nucleus and into the cytoplasm, where amino acids are linked together to form the specified proteins.

Molecules shuttling between the nucleus and cytoplasm are known to pass through protein complexes called nuclear pores. After tagging messenger RNA molecules with a yellow fluorescent protein (which appears green in the accompanying image) and tagging the nuclear pore with a red fluorescent protein, the researchers used high-speed cameras to film messenger RNA molecules as they traveled across the pores. The Nature paper reveals the dynamic and surprising mechanism by which nuclear pores "translocate" messenger RNA molecules from the nucleus into the cytoplasm: this is the first time the transport has been observed in living cells in real time.

"Up until now, we'd really had no idea how messenger RNA travels through nuclear pores," said Dr. Singer. "Researchers intuitively thought that the squeezing of these molecules through a narrow channel such as the nuclear pore would be the slow part of the translocation process. But to our surprise, we observed that messenger RNA molecules pass rapidly through the nuclear pores, and that the slow events were docking on the nuclear side and then waiting for release into the cytoplasm."

More specifically, Dr. Singer found that single messenger RNA molecules arrive at the nuclear pore and wait for 80 milliseconds (80 thousandths of a second) to enter; they then pass through the pore breathtakingly fast -- in just 5 milliseconds; finally, the molecules wait on the other side of the pore for another 80 milliseconds before being released into the cytoplasm.
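
A quick back-of-the-envelope check in Python (an illustration of the arithmetic, not code from the study) shows how little of the total transit time is spent inside the pore itself:

```python
# Transit times reported in the study (in milliseconds).
dock_ms = 80      # waiting at the nuclear side of the pore
pore_ms = 5       # actual passage through the pore
release_ms = 80   # waiting on the cytoplasmic side before release

total_ms = dock_ms + pore_ms + release_ms
print(f"Total transit time: {total_ms} ms")                         # 165 ms
print(f"Fraction spent inside the pore: {pore_ms / total_ms:.1%}")  # 3.0%
```

Roughly 97 percent of the transit, in other words, is spent waiting at one face of the pore or the other, which is why docking and release, not the squeeze through the channel, are the rate-limiting steps.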

The waiting periods observed in this study, and the observation that 10 percent of messenger RNA molecules sit for seconds at nuclear pores without gaining entry, suggest that messenger RNA could be screened for quality at this point.

"Researchers have speculated that messenger RNA molecules that are defective in some way, perhaps because the genes they're derived from are mutated, may be inspected and destroyed before getting into the cytoplasm or a short time later, and the question has been, 'Where might that surveillance be happening?'," said Dr. Singer. "So we're wondering if those messenger RNA molecules that couldn't get through the nuclear pores were subjected to a quality control mechanism that didn't give them a clean bill of health for entry."

In previous research, Dr. Singer studied myotonic dystrophy, a severe inherited disorder marked by wasting of the muscles and caused by a mutation involving repeated DNA sequences of three nucleotides. Dr. Singer found that in the cells of people with myotonic dystrophy, messenger RNA gets stuck in the nucleus and can't enter the cytoplasm. "By understanding how messenger RNA exits the nucleus, we may be able to develop treatments for myotonic dystrophy and other disorders in which messenger RNA transport is blocked," he said.

The paper, "In Vivo Imaging of Labelled Endogenous β-actin mRNA during Nucleocytoplasmic Transport," was published in the September 15 online edition of Nature.

From sciencedaily.com

Genetic 'Light Switches' Control Muscle Movement

Using light-sensitive proteins from a single-celled alga and a tiny LED "cuff" placed on a nerve, researchers have triggered the leg muscles of mice to contract in response to millisecond pulses of light.

Light movement: This image shows a cross-section of a mouse sciatic nerve genetically engineered to produce a light-sensitive protein (shown in green). Stanford researchers used this protein to trigger muscle movements in the animal’s leg.


The study, published in the journal Nature Medicine, marks the first use of the nascent technology known as optogenetics to control muscle movements. Developed by study coauthor Karl Deisseroth, an associate professor of bioengineering and of psychiatry and behavioral science at Stanford University, optogenetics makes it possible to stimulate neurons with light by inserting the gene for a protein called channelrhodopsin-2, from a green alga. When a modified neuron is exposed to blue light, the protein initiates electrical activity inside the cell that then spreads from neuron to neuron. By controlling which neurons make the protein, as well as which cells are exposed to light, scientists can control neural activity in living animals with unprecedented precision. The paper's other senior author, Scott Delp, a professor of bioengineering, mechanical engineering, and orthopedic surgery at Stanford, says that the optical control method provides "fantastic advantages over electrical stimulation" for his study of muscles and the biomechanics of human movement. 

Members of Deisseroth's lab had engineered mice to produce channelrhodopsin-2 in both the central and the peripheral nervous systems. Michael Llewellyn, a former graduate student in Delp's lab, developed a tiny, implantable LED cuff to apply light to the nerve evenly. He placed the cuff on the sciatic nerves of anesthetized mice and triggered millisecond pulses of light. This caused the leg muscles of the mice to contract. When Llewellyn compared the muscle contractions stimulated by light to those generated using a similar electrical cuff, he found that the light-triggered contractions were much more similar to normal muscle activity.

Muscles are made up of two different types of fibers: small, slow, fatigue-resistant fibers that are typically used for tasks requiring fine motor control over longer periods, and larger, faster fibers that can produce higher forces but fatigue more quickly. In the body, the small, slow fibers are activated first, with the large, fast fibers reserved for quick bursts of power or speed. When muscles are stimulated with electrical pulses, the fast fibers activate first. With the optogenetic switch, however, the fibers were recruited in the normal, physiological order: slow fibers first, fast fibers second. By altering the intensity of the light, Llewellyn found that he could even trigger only the slow fibers--a feat not possible with electrical stimulation.
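
The recruitment-order difference can be pictured with a toy model. The sketch below is illustrative Python, not the Stanford team's code, and the threshold numbers are invented: it simply assumes slow fibers respond to weaker light than fast fibers, while electrical current reaches the large, fast fibers first.

```python
# Toy model of fiber recruitment order; thresholds are invented.
fibers = [
    {"type": "slow", "light_threshold": 1.0, "electrical_threshold": 3.0},
    {"type": "fast", "light_threshold": 2.0, "electrical_threshold": 1.0},
]

def recruited(stimulus: str, amplitude: float) -> list:
    """Return the fiber types activated at a given stimulus amplitude."""
    key = f"{stimulus}_threshold"
    return [f["type"] for f in fibers if amplitude >= f[key]]

print(recruited("light", 1.5))       # ['slow'] -- slow fibers alone
print(recruited("light", 2.5))       # ['slow', 'fast'] -- physiological order
print(recruited("electrical", 1.5))  # ['fast'] -- reversed, fast fibers first
```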

In the near term, Delp says, the technology will improve the studies that his lab and others do on muscle activity in animal models of stroke, palsies, ALS, and other neuromuscular disorders. He also hopes that in time--a long time, he concedes--such optical switches could be used to help patients with physical disabilities caused by nerve damage such as stroke, spinal cord injury, or cerebral palsy. One possibility, he says, would be to use optical stimulation in place of functional electrical stimulation (FES), in which electrical current is applied to specific nerves or muscles to trigger muscle contractions. The U.S. Food and Drug Administration has already approved FES devices that can restore hand function and bladder control to some paralyzed people. However, FES can quickly lead to muscle fatigue. Delp hopes that, particularly with grasping functions, using optical stimulation might result in better fatigue resistance and perhaps finer muscle control. 

"This is a brilliant study, really beautiful science," says Robert Kirsch, a bioengineer at Case Western Reserve University and associate director of the Cleveland Functional Electrical Stimulation Center; he was not involved in the research. "I think there are many [clinical implications]," he says, although, like Delp and Llewellyn, he notes that many high hurdles must be cleared--not least of which is developing a safe, effective way to deliver the channelrhodopsin-2 gene to nerve cells in humans. Otherwise, Kirsch says, "my one objection would be their implication that they've solved the fatigue problem with FES. I'm pretty sure that hasn't happened." Instead, Kirsch believes that most of the fatigue seen in FES patients is due to muscle atrophy and weakness that develop in the chronically paralyzed.

C.J. Heckman, a professor of physiology at Northwestern University's Feinberg School of Medicine, agrees: "It is true that a lot of the fatigue seen in FES patients is due to chronic muscle atrophy." But, he says, "if you could stimulate the muscles in the correct recruitment order repeatedly over time, you could potentially recover a lot of muscle function." This could help paralysis patients preserve their slow muscle fibers, "which would be a huge deal," Heckman says. This is because those fibers do a huge percentage of the work muscles do--everything from maintaining posture to typing on a keyboard. 

Delp also thinks that stimulation-based exercise could be an important application for optical muscle control, as could helping wheelchair-bound people stand to reach for books or plates in a cabinet. "I'm not super-high on controlling locomotion"--that is, walking--"with either electrical or optical stimulation, though," Delp says. "It's an incredibly complicated command-and-control scheme that's really hard to coordinate."

In the meantime, Delp and Llewellyn have begun an effort to use a different light-sensitive protein, halorhodopsin, to inhibit motor nerves in mice, with the idea of treating or even curing muscle spasticity, often a serious side effect of brain or spinal injury. Current treatments are far from ideal: doctors may inject botulinum toxin into the affected muscles every few months to paralyze them, use oral medications such as Valium that affect the whole body instead of just the affected muscle, or, in the most severe cases, cut the nerves or tendons of the spastic muscle--a permanent treatment that leaves the patient with no control over that muscle. Delp hopes that genetically engineering the nerves with halorhodopsin might enable people to use light to reversibly relax muscles affected by spasticity.

"I think that's a great idea for treating spasticity," says Jerry Silver, a neuroscientist at Case Western. There may be some difficulties along the way, though, he says. Working with Case colleagues, Silver has started a company called LucCell to develop clinical applications of optogenetics. In one company project, scientists are trying to use halorhodopsin and other inhibitory opsins in animal models to turn off the muscle that controls the bladder sphincter; their ultimate goal is to restore bladder function to paralyzed people. Though they have seen some physiological changes in how the sphincter muscle behaves, they haven't been able to get it to relax enough. "We're learning it's easier to turn things on than turn things off," he says. Still, the team is persisting, looking for better ways to deliver the gene to nerve cells and for ways to increase production of the protein on the cell's surfaces. 

"It all depends on the ability to get the transgene in the right place in the person's genome without causing problems," agrees Llewellyn. "It's the main obstacle."

By Erika Jonietz
From Technology Review

Measuring Atomic Memory with Nano Precision

The events that take place inside atoms occur at speeds that are normally much too fast to capture. Now researchers at IBM's Almaden Research Center have developed a technique that lets them watch this atomic action with unprecedented resolution.

The researchers used the technique to flip the orientation of an atom's spin, a fundamental quantum property, and then to measure how long the atom "remembered" this state before returning to its natural spin state. This is a first step toward developing a kind of computer memory that works on the atomic scale, and the technique could also be used by materials scientists to perform the basic research necessary in making more efficient organic solar materials.

Memory machine: IBM researcher Sebastian Loth operates the scanning tunneling microscope that his team used to measure how long a single atom can store information. 


Influencing and measuring an atom's spin state is one way to make a quantum bit, or qubit, which can simultaneously serve as both a 1 and a 0 in a quantum computer. It is possible to take a static measurement of an atom's spin, but until now it hasn't been possible to watch an atom's spin change over time. 

Researchers led by Don Eigler and Andreas Heinrich at IBM's lab in San Jose, California, were able to watch atomic spins flip, or "relax," over time using a modified scanning tunneling microscope, or STM--an instrument IBM researchers invented in 1981. They captured images of the atom's state every five nanoseconds--a million times faster than before.

The IBM researchers found that a single iron atom can store magnetic information in the form of spin for about one nanosecond. However, when the iron atom is near a copper atom, its quantum memory is prolonged, so that it takes about 200 nanoseconds for the spin to relax. The results were published last week in the journal Science.

"The information decays in 200 nanoseconds, but that's a lot of time," says Sebastian Loth, a member of the research team. "Current processors do several hundred cycles of calculations in that time."

When the tip of an STM is brought very close to a surface, electrical current can flow between the tip and atoms on the surface. By moving the tip over the surface, the microscope can generate a picture of it. And by analyzing the flow of current, it's possible to learn about an atom's magnetic state, including its spin.

To improve the time resolution of the STM, the researchers modified the tip so that it not only measured electrical current but also supplied it. They fed a current pulse to an atom and then measured its state after a fixed period of time, taking 100,000 measurements for each such delay. By varying the delay between pulse and measurement and repeating the process again and again, they turned each delay setting into one frame of a video; assembled in sequence, the frames form a moving picture of the atom's spin state, with a frame every five nanoseconds or so.
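
The scheme is essentially a pump-probe experiment. The toy simulation below, assuming simple exponential spin relaxation, shows how averaging many identical pump-probe shots at each delay builds up one frame of the "movie" (the retention time and repeat count match the article; the code itself is only a sketch of the idea):

```python
import math
import random

T1_NS = 200.0        # spin relaxation time near a copper atom, per the article
N_REPEATS = 100_000  # measurements averaged per delay setting
FRAME_STEP_NS = 5    # time resolution of the resulting "video"

def probe_once(delay_ns: float) -> int:
    """One pump-probe shot: did the flipped spin survive until the probe?"""
    return 1 if random.random() < math.exp(-delay_ns / T1_NS) else 0

# Each frame of the movie is the average of many identical shots.
frames = []
for delay_ns in range(0, 1000, FRAME_STEP_NS):
    shots = sum(probe_once(delay_ns) for _ in range(N_REPEATS))
    frames.append(shots / N_REPEATS)

print(frames[:4])  # ~[1.0, 0.975, 0.951, 0.928]: an exponential decay emerges
```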

Loth says the IBM researchers hope to use the fast STM technique for two basic areas of research. First, they'll continue using it to determine whether different combinations of atoms can store quantum information for longer. Second, by using a stream of photons instead of a stream of electrons as the pulse signal, says Loth, the researchers hope to gain a better understanding of how some organic molecules convert light into electrical energy. This could lead to better solar cells.

Systems like IBM's for flipping and measuring atomic spins could potentially be part of a future quantum computer, says Alán Aspuru-Guzik, professor of chemistry and chemical biology at Harvard University. Altering and measuring the spin of atoms, and being able to predict how atoms will behave, is an important step towards this goal, he says. Most of the devices that have been made so far, he says, are more like "quantum toys" than computers. But the field is moving steadily forward, he says. "Every week someone demonstrates manipulating the qubit a little better."

By Katherine Bourzac 
From Technology Review

Universal, Primordial Magnetic Fields Discovered in Deep Space

Caltech physicist Shin'ichiro Ando and Alexander Kusenko, a professor of physics and astronomy at UCLA, report the discovery in a paper to be published in an upcoming issue of Astrophysical Journal Letters.

Ando and Kusenko studied images of the most powerful objects in the universe -- supermassive black holes that emit high-energy radiation as they devour stars in distant galaxies -- obtained by NASA's Fermi Gamma-ray Space Telescope.

 An artist's conception of an "active galactic nucleus." In some galaxies, the nucleus, or central core, produces more radiation than the entire rest of the galaxy.

"We found the signs of primordial magnetic fields in deep space between galaxies," Ando said.
Physicists have hypothesized for many years that a universal magnetic field should permeate deep space between galaxies, but there was no way to observe it or measure it until now.
The physicists produced a composite image of 170 giant black holes and discovered that the images were not as sharp as expected.

"Because space is filled with background radiation left over from the Big Bang, as well as emitted from galaxies, high-energy photons emitted by a distant source can interact with the background photons and convert into electron-positron pairs, which interact in their turn and convert back into a group of photons somewhat later," said Kusenko, who is also a senior scientist at the University of Tokyo's Institute for Physics and Mathematics of the Universe.

"While this process by itself does not blur the image significantly, even a small magnetic field along the way can deflect the electrons and positrons, making the image fuzzy," he said.

From such blurred images, the researchers found that the average magnetic field had a "femto-Gauss" strength, just one-quadrillionth of the Earth's magnetic field. The universal magnetic fields may have formed in the early universe shortly after the Big Bang, long before stars and galaxies formed, Ando and Kusenko said.
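
The "one-quadrillionth" figure is consistent with typical values (the sketch below assumes Earth's surface field is roughly 0.5 gauss, a round number not taken from the paper):

```python
earth_field_gauss = 0.5  # typical surface field, an assumed round value
femto_gauss = 1e-15      # the reported field-strength scale

ratio = femto_gauss / earth_field_gauss
print(f"{ratio:.0e}")  # 2e-15, i.e. of order one-quadrillionth of Earth's field
```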

The research was funded by NASA, the U.S. Department of Energy and Japan's Society for the Promotion of Science.

From sciencedaily.com

Quantum Computing Closer Than Ever: Scientists Using Lasers to Cool and Control Molecules

Now a team of Yale physicists has used lasers for a completely different purpose, employing them to cool molecules down to temperatures near what's known as absolute zero, about -460 degrees Fahrenheit. Their new method for laser cooling, described in the online edition of the journal Nature, is a significant step toward the ultimate goal of using individual molecules as information bits in quantum computing.

Currently, scientists use either individual atoms or "artificial atoms" as qubits, or quantum bits, in their efforts to develop quantum processors. But individual atoms don't communicate as strongly with one another as is needed for qubits. On the other hand, artificial atoms -- which are actually circuit-like devices made up of billions of atoms that are designed to behave like a single atom -- communicate strongly with one another, but are so large they tend to pick up interference from the outside world. Molecules, however, could provide an ideal middle ground.

 A new method for laser cooling could help pave the way for using individual molecules as information bits in quantum computing.

"It's a kind of Goldilocks problem," said Yale physicist David DeMille, who led the research. "Artificial atoms may prove too big and individual atoms may prove too small, but molecules made up of a few different atoms could be just right."

In order to use molecules as qubits, physicists first have to be able to control and manipulate them -- an extremely difficult feat, as molecules generally cannot be picked up or moved without disturbing their quantum properties. In addition, even at room temperature molecules have a lot of kinetic energy, which causes them to move, rotate and vibrate.

To overcome the problem, the Yale team pushed the molecules using the subtle kick delivered by a steady stream of photons, or particles of light, emitted by a laser. By hitting the molecules with laser beams from opposite directions, they were able to reduce the molecules' random velocities. The technique is known as laser cooling because temperature is a direct measure of the random motion of a group of molecules: reducing that motion to almost nothing is equivalent to driving the temperature to virtually absolute zero.
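
Each absorbed photon delivers only a tiny momentum kick, which is why enormous numbers of scattering events are needed. A rough estimate for strontium monofluoride, the molecule used in the study (the mass and wavelength below are approximate values assumed for illustration, not figures from the paper):

```python
# Rough photon-recoil estimate for SrF; the inputs are approximations.
h = 6.626e-34        # Planck's constant, J*s
amu = 1.661e-27      # atomic mass unit, kg
m_srf = 107 * amu    # SrF molecular mass (~88 amu Sr + ~19 amu F)
wavelength = 663e-9  # SrF cooling transition, roughly 663 nm

recoil_velocity = h / (m_srf * wavelength)  # p = h / lambda, then v = p / m
print(f"{recoil_velocity * 1000:.1f} mm/s per photon")  # ~5.6 mm/s
```

Since molecules at room temperature move at hundreds of meters per second, tens of thousands of such kicks, delivered against the direction of motion, are needed to bring the random motion to a near standstill.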

While scientists had previously been able to cool individual atoms using lasers, the Yale team's work represents the first time lasers have been used to cool molecules, which present unique challenges because of their more complex structures.

The team used the molecule strontium monofluoride in their experiments, but DeMille believes the technique will also prove successful with other molecules. Beyond quantum computing, laser cooling of molecules has potential applications in chemistry, where near-absolute-zero temperatures could induce currently inaccessible reactions via a quantum mechanical process known as "quantum tunneling." DeMille also hopes to use laser cooling to study particle physics, where precise measurements of molecular structure could give clues to the possible existence of exotic, as-yet-undiscovered particles.

"Laser cooling of atoms has created a true scientific revolution. It is now used in areas ranging from basic science such as Bose-Einstein condensation, all the way to devices with real-world impacts such as atomic clocks and navigation instruments," DeMille said. "The extension of this technique to molecules promises to open an exciting new range of scientific and technological applications."

Other authors of the paper include Edward Shuman and John Barry (both of Yale University).

From sciencedaily.com

Brain Coprocessors

The last few decades have seen a surge of invention of technologies that enable the observation or perturbation of information in the brain. Functional MRI, which measures blood flow changes associated with brain activity, is being explored for purposes as diverse as lie detection, prediction of human decision making, and assessment of language recovery after stroke. Implanted electrical stimulators, which enable control of neural circuit activity, are borne by hundreds of thousands of people to treat conditions such as deafness, Parkinson's disease, and obsessive-compulsive disorder. And new methods, such as the use of light to activate or silence specific neurons in the brain, are being widely utilized by researchers to reveal insights into how to control neural circuits to achieve therapeutically useful changes in brain dynamics. We are entering a neurotechnology renaissance, in which the toolbox for understanding the brain and engineering its functions is expanding in both scope and power at an unprecedented rate.



This toolbox has grown to the point where the strategic utilization of multiple neurotechnologies in conjunction with one another, as a system, may yield fundamental new capabilities, both scientific and clinical, beyond what they can offer alone. For example, consider a system that reads out activity from a brain circuit, computes a strategy for controlling the circuit so it enters a desired state or performs a specific computation, and then delivers information into the brain to achieve this control strategy. Such a system would enable brain computations to be guided by predefined goals set by the patient or clinician, or adaptively steered in response to the circumstances of the patient's environment or the instantaneous state of the patient's brain. 

Some examples of this kind of "brain coprocessor" technology are under active development, such as systems that perturb the epileptic brain when a seizure is electrically observed, and prosthetics for amputees that record nerves to control artificial limbs and stimulate nerves to provide sensory feedback. Looking down the line, such system architectures might be capable of very advanced functions--providing just-in-time information to the brain of a patient with dementia to augment cognition, or sculpting the risk-taking profile of an addiction patient in the presence of stimuli that prompt cravings.

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an "operating system" that defines how the overall system works as a unified whole--analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components. 
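
As a sketch of what such common interfaces might look like (entirely hypothetical; no such standard exists today), the Python below separates readout, computation, and perturbation into interchangeable components behind minimal interfaces:

```python
from abc import ABC, abstractmethod

class Readout(ABC):
    """Any technology that reports brain state (electrodes, imaging, etc.)."""
    @abstractmethod
    def read_state(self) -> dict: ...

class Perturbation(ABC):
    """Any technology that inputs information (electrical, optical, etc.)."""
    @abstractmethod
    def apply(self, command: dict) -> None: ...

class Coprocessor:
    """The 'operating system': one read-compute-control loop."""
    def __init__(self, reader: Readout, controller, writer: Perturbation):
        self.reader, self.controller, self.writer = reader, controller, writer

    def step(self) -> None:
        state = self.reader.read_state()  # observe the circuit
        command = self.controller(state)  # compute a control strategy
        self.writer.apply(command)        # steer toward the goal brain state
```

Swapping in a new recording technology would then mean writing one new Readout component, leaving the computation and perturbation components untouched--the modularity the architecture is meant to provide.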

Developing such brain coprocessor architectures would take some work--in particular, it would require technologies standardized enough, or perhaps open enough, to be interoperable in a variety of combinations. Nevertheless, much could be learned from developing relatively simple prototype systems. For example, recording technologies by themselves can report brain activity, but cannot fully attest to the causal contribution that the observed brain activity makes to a specific behavioral or clinical outcome; control technologies can input information into neural targets, but by themselves their outcomes might be difficult to interpret due to endogenous neural information and unobserved neural processing. These scientific issues can be disambiguated by rudimentary brain coprocessors, built with readily available off-the-shelf components, that use recording technologies to assess how a given neural circuit perturbation alters brain dynamics. Such explorations may begin to reveal principles governing how best to control a circuit--revealing the neural targets and control strategies that most efficaciously lead to a goal brain state or behavioral effect, and thus pointing the way to new therapeutic strategies. Miniature, implantable brain coprocessors might be able to support new kinds of personalized medicine, for example continuously adapting a neural control strategy to the goals, state, environment, and history of an individual patient--important powers, given the dynamic nature of many brain disorders.

In the future, the computational module of a brain coprocessor may be powerful enough to assist in high-level human cognition or complex decision making. Of course, the augmentation of human intelligence has been a key goal of computer engineers for well over half a century. Indeed, if we relax the definition of a brain coprocessor just a bit, so as not to require direct physical access to the brain, many consumer technologies being developed today are converging on brain coprocessor-like architectures. A large number of new technologies attempt to discover information useful to a user and to deliver it in real time, and these discovery and delivery processes are increasingly shaped by the user's environment (e.g., location) and history (e.g., social interactions, searches). Thus we are seeing a departure from the classical view, in which computers receive goals from humans, perform defined computations, and then provide the results back to humans--a shift anticipated by early thinkers about human-machine symbiosis such as J. C. R. Licklider.

Of course, giving machines the authority to serve as proactive human coprocessors, and allowing them to capture our attention with their computed priorities, has to be considered carefully, as anyone who has lost hours due to interruption by a slew of social-network updates or search-engine alerts can attest. How can we give the human brain access to increasingly proactive coprocessing technologies without losing sight of our overarching goals? One idea is to develop and deploy metrics that allow us to evaluate the IQ of a human plus a coprocessor, working together--evaluating the performance of collaborating natural and artificial intelligences in a broad battery of problem-solving contexts. After all, humans with Internet-based brain coprocessors (e.g., laptops running Web browsers) may be more distractible if the goals include long, focused writing tasks, but they may be better at synthesizing data broadly from disparate sources; a given brain coprocessor configuration may be good for some problems but bad for others. Thinking of emerging computational technologies as brain coprocessors forces us to think about them in terms of the impacts they have on the brain, positive and negative, and importantly provides a framework for thoughtfully engineering their direct, as well as their emergent, effects.

By Edward Boyden, Brian Allen, Doug Fritz
From Technology Review

An Ultrasensitive Explosives Detector

A nanowire sensor made by researchers at Tel Aviv University can detect extremely small traces of commonly used explosives in liquid or air in a few seconds. The device is a thousand times more sensitive than the current gold standard in explosives detection: the sniffer dog.

Bomb detector: Chemically treated silicon nanowires at the center of this glass chip change conductance when exposed to minute traces of explosives in air or liquids.

The sensor could be cheaply produced and incorporated into a handheld instrument for detecting buried landmines or concealed explosives at security checkpoints, according to Fernando Patolsky, a chemistry professor who led the work, which was published in the journal Angewandte Chemie last week. The researchers are developing a portable instrument based on the technology. Their first prototype is about the size of a brick. "We could put the sensors everywhere in an airport, in every corner of a shopping mall," Patolsky says.

Trained dogs have historically been used to sniff out bombs and landmines because they can smell explosives at concentrations of just a few parts per billion. But it takes tens of thousands of dollars to train and maintain a sniffer dog, so handheld detectors promise a cheaper and more portable solution.

The nanowire array isn't the first device to achieve canine levels of explosive-sniffing sensitivity. A system developed by ICx Technologies, based in Arlington, Virginia, can detect vapors given off by explosives with a sensitivity matching that of a canine nose. Instead of nanowires, the ICx system uses polymers that glow or stop glowing in response to traces of explosive in a vapor in a few seconds. This device is being used in battlefields in Iraq and Afghanistan, and the U.S. Transportation Security Administration recently started using it at airports, but most airports still rely on microwave oven-sized instruments that take minutes, rather than seconds, to detect explosives in swabs taken from luggage or passengers' skin.

The new Tel Aviv University device is a thousand times more sensitive than any existing detector, including the ICx device. The researchers have used it to detect TNT and the plastic explosives RDX and PETN at concentrations lower than one part per trillion in a few seconds.

The device consists of a chip containing an array of silicon nanowires coated with an organic amine compound that binds to the explosives, changing the wires' conductance. Patolsky says the trick is to grow nanowires at desired spots and along a defined direction on the chip. Once the array is grown, the researchers coat the nanowires and deposit the electrodes. In laboratory tests, the chip was exposed to liquid solutions containing explosives, as well as to TNT vapors mixed with air. The researchers are working on packaging the chip with microfluidics pumps and electronics to make a low-power, portable detector.

Aimee Rose, a researcher at ICx Technologies, says that using nanowire arrays for sensing shows great promise because the method is so sensitive, allowing potential developers to "put many sensors in a small footprint," but the method will have to prove its mettle with real-world vapor samples.

MIT chemistry professor Timothy Swager agrees, pointing out that currently the array only works convincingly when detecting explosives in a solution. It is less effective at picking out vapors of explosives from a person's skin or belongings, he says, noting that the array works best when TNT vapor-containing air samples are blown directly at the nanowires.

Harvard University chemistry professor Charles Lieber says that the nanowire sensor approach is much more sensitive than the ICx polymer technology, which was developed in Swager's lab, but it has not yet been proven the way the ICx technology has. Lieber, who focuses on biomedical applications of nanowire transistors, says the Israeli research shows that nanowire sensing could be applied for explosives detection and could be readily commercialized. "There are no limitations to the methodology from my perspective...it has potential to revolutionize explosives detection."

Patolsky and colleagues are now making larger nanowire arrays coated with different molecules for detecting other kinds of explosives.

By Prachi Patel
From Technology Review

Personal Exoskeletons for Paraplegics

Exoskeletons--wearable, motorized machines that can assist a person's movements--have largely been confined to movies or military use, but recent advances might soon bring the devices to the homes of people with paralysis. 

So far, exoskeletons have been used to augment the strength of soldiers or to help hospitalized stroke patients relearn how to walk. Now researchers at the University of California, Berkeley, have demonstrated an exoskeleton that is portable and lets paraplegics walk in a relatively natural gait with minimal training. That could be an improvement for people with spinal-cord injuries who spend a lot of time in wheelchairs, which can cause sores or bone deterioration. 

Existing medical exoskeletons for patients who have lost function in their lower extremities have either not been equipped with power sources or have been designed for tethered use in rehabilitation facilities, to correct and condition a patient's gait. 

In contrast, the Berkeley exoskeleton combines "the freedom of not being tethered with a natural gait," says Katherine Strausser, a PhD candidate and one of the lead researchers on the Berkeley project. Last week at the 2010 ASME Dynamic Systems and Control Conference in Cambridge, Massachusetts, Strausser presented experimental results from four paraplegics who used the exoskeleton.

Assisted Steps: A patient with paralysis stands with the aid of the Berkeley exoskeleton. The exoskeleton moves the patient’s hips and knees to imitate a natural walk.

Other mobile exoskeletons--like those developed by companies such as Rex Bionics or Cyberdyne--don't try to emulate a natural gait, Strausser says. Because walking is a dynamic motion that is essentially a controlled fall forward, many designs opt for a shuffle instead, because "it's safer and a lot easier." Emulating a natural gait, however, captures the efficiency of normal walking and doesn't strain the hips, she says.

The Berkeley device, which houses a computer and battery pack, straps onto a user's back like a backpack and can run six to eight hours on one charge. Pumps drive hydraulic fluid to move the hip and knees at the same time, so that the hip swings through a step as one knee bends. The device plans walking trajectories based on data (about limb angles, knee flexing, and toe clearance) gathered from people's natural gaits. Pressure sensors in each heel and foot make sure both feet aren't leaving the ground at the same time. 
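
A simplified sketch of that safety check (illustrative Python; this is not Berkeley's controller, and the sensor interface and threshold are invented):

```python
# Step-safety logic sketch; sensor names and threshold are invented.
CONTACT_THRESHOLD = 20.0  # pressure reading treated as "foot on ground"

def can_start_swing(heel: dict, toe: dict, swing_leg: str) -> bool:
    """Allow a step only if the opposite (stance) leg is firmly planted."""
    stance_leg = "right" if swing_leg == "left" else "left"
    return (heel[stance_leg] > CONTACT_THRESHOLD or
            toe[stance_leg] > CONTACT_THRESHOLD)

heel = {"left": 45.0, "right": 0.0}
toe = {"left": 30.0, "right": 5.0}
print(can_start_swing(heel, toe, swing_leg="right"))  # True: left leg planted
```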

The Berkeley program was successful. The four paraplegics described in Strausser's talk, three of whom had been in wheelchairs for years, were able to walk with the device after only two hours of training. "It's very easy to walk in," says Strausser. "It moves your leg exactly like you would in your normal gait." To begin a step, the exoskeleton requires a user to press a button on a remote control; the team is working on a more intuitive interface. 

When designing the medical exoskeleton--which uses parts from two military exoskeletons--the team needed controllers and a design that takes into account the user's lack of strength. While military exoskeletons work with a soldier's motion to add strength, medical exoskeletons do the opposite, fighting against incorrect gaits or performing the gait, explains Strausser. "The biggest problem is holding a person into the 'exo' safely and securely," she says. After field testing at the University of Virginia's Clinical Motion Analysis and Motor Performance Laboratory last year, the group developed a proprietary design that keeps users from sliding out of the exoskeleton and distributes the weight of the 80-pound machine. The group plans to make the device lighter and to make a low-cost version that patients can use in their homes. (The research group is affiliated with a company, Berkeley Bionics, that plans to begin selling a form of the technology.)

"Overall I think it's a very good device," says Panagiotis Artemiadis, an MIT researcher who heard Strausser's talk. He is developing an exoskeleton called the MIT-SkyWalker that helps stroke patients practice walking on a machine that resembles a treadmill. He says he can picture the Berkeley device being used by patients in their homes, particularly if the researchers reduce the weight.

Other mobile exoskeletons to help paralyzed people are just starting to come to market. Israeli company Argo Medical Technologies is releasing its first product, a 100,000-euro exoskeleton intended for use in rehab centers, in October. The company plans to release a home version soon after for about half the price. Unlike the Berkeley exoskeleton, this one, dubbed ReWalk, takes the user a few weeks to learn. "It's like getting a driver's license," says John Frijters, vice president of business development for Argo. ReWalk is customizable: the sensitivity of the sensors, the step length, and the stride can be tailored to how the user feels. It weighs about 45 pounds and runs eight to 10 hours on a charge, according to Frijters.

While ReWalk doesn't yet have data to share on the advantages of using exoskeletons, "dozens" of patients have tested ReWalk, and "they all enjoy the benefit of being active," says Frijters. "They have the opportunity to get up from the wheelchair and walk again. It's very emotional."

By Kristina Grifantini 
From Technology Review

Hungarian Stringbike Prototype Swaps Chain for Wires

Designers from Schwinn Csepel Zrt. think they've perfected the bicycle by removing that messy chain and replacing it with a hipster. Ha! Sorry, I meant cable. Two, to be precise.


The assembly can be seen in the image above (the taut cable trails off to the left). The rest of the pedal assembly includes unique kidney-shaped eccentric discs, a swinging unit and the transmission.

The kidney discs serve the same function as a traditional circle-shaped gear, although in the Stringbike's case different sized and shaped discs can be installed that change performance (e.g. racing or touring).

Next is the swing unit, which is comprised of "oppositely swinging arms arranged for swinging movement around a pivoted auxiliary axis," reports Hungarian Ambiance.


Finally, the transmission. Much like a traditional 10-speed's, it controls the gearing, low and high. Using a controlled slide, the Stringbike can change the height of the two pulleys employed by the system, which results in a gear change. I think.

It seems there are actually a few legitimate benefits to such a system, including better maneuverability on winding streets at speed. Also, as I can attest as a person who enjoys cycling, there is no oil or grease to deal with (i.e. clean clothes and hands after servicing).
 

Why Thinking of Nothing Can Be So Tiring: Brain Wolfs Energy to Stop Thinking

Mathematicians at Case Western Reserve University may have part of the answer. They've found that just as thinking burns energy, stopping a thought burns energy -- like stopping a truck on a downhill slope.

"Maybe this explains why it is so tiring to relax and think about nothing," said Daniela Calvetti, professor of mathematics, and one of the authors of a new brain study. Their work is published in an advanced online publication of Journal of Cerebral Blood Flow & Metabolism.

 Mathematicians have found that the brain uses a substantial amount of energy to halt the flow of information between neurons.

Opening up the brain for detailed monitoring isn't practical. So, to understand energy usage, Calvetti teamed with Erkki Somersalo, professor of mathematics, and Rossana Occhipinti, who used this work to help earn a PhD in math last year and is now a postdoctoral researcher in the department of physiology and biophysics at the Case Western Reserve School of Medicine. They developed equations and statistics and built a computer model of brain metabolism.

The computer simulations for this study were run using Metabolica, a software package that Calvetti and Somersalo designed to study complex metabolic systems. The software produces a numerical rendering of the pathways linking excitatory neurons, which transmit thoughts, and inhibitory neurons, which put on the brakes, with star-like brain cells called astrocytes. Astrocytes supply essential chemicals and services to both kinds of neurons.

To stop a thought, the brain uses inhibitory neurons to prevent excitatory neurons from passing information from one to another.

"The inhibitory neurons are like a priest saying, 'Don't do it,'" Calvetti said. The "priest neurons" block information by releasing gamma aminobutyric acid, commonly called GABA, which counteracts the effect of the neurotransmitter glutamate by excitatory neurons.

Glutamate opens the synaptic gates. GABA holds the gates closed.

"The astrocytes, which are the Cinderellas of the brain, consume large amounts of oxygen mopping up and recycling the GABA and the glutamate, which is a neurotoxin," Somersalo said.

More oxygen requires more blood flow, although the connection between cerebral metabolism and hemodynamics is not fully understood yet.

Altogether, "It's a surprising expense to keep inhibition on," he said.

The group plans to compare the energy use of excitatory and inhibitory neurons more closely by running simultaneous simulations of both processes.

The researchers are plumbing basic science, but their goal is to help solve human problems.

Brain diseases and damaging conditions are often difficult to diagnose until advanced stages. Most brain maladies, however, are linked to energy metabolism, and understanding the norm may enable doctors to detect problems earlier.

The toll inhibition takes may, in particular, be relevant to neurodegenerative diseases. "And that is truly exciting," Calvetti said.

From sciencedaily.com

Magical BEANs: New Nano-Sized Particles Could Provide Mega-Sized Data Storage

"Phase changes in BEANs, switching them from crystalline to amorphous and back to crystalline states, can be induced in a matter of nanoseconds by electrical current, laser light or a combination of both," says Daryl Chrzan, a physicist who holds joint appointments with Berkeley Lab's Materials Sciences Division and UC Berkeley's Department of Materials Science and Engineering. "Working with germanium tin nanoparticles embedded in silica as our initial BEANs, we were able to stabilize both the solid and amorphous phases and could tune the kinetics of switching between the two simply by altering the composition."

 This schematic shows enthalpy curves sketched for the liquid, crystalline and amorphous phases of a new class of nanomaterials called "BEANs" for Binary Eutectic-Alloy Nanostructures.

Chrzan is the corresponding author of a paper reporting the results of this research, published in the journal Nano Letters under the title "Embedded Binary Eutectic Alloy Nanostructures: A New Class of Phase Change Materials."

Co-authoring the paper with Chrzan were Swanee Shin, Julian Guzman, Chun-Wei Yuan, Christopher Liao, Cosima Boswell-Koller, Peter Stone, Oscar Dubon, Andrew Minor, Masashi Watanabe, Jeffrey Beeman, Kin Yu, Joel Ager and Eugene Haller.

"What we have shown is that binary eutectic alloy nanostructures, such as quantum dots and nanowires, can serve as phase change materials," Chrzan says. "The key to the behavior we observed is the embedding of nanostructures within a matrix of nanoscale volumes. The presence of this nanostructure/matrix interface makes possible a rapid cooling that stabilizes the amorphous phase, and also enables us to tune the phase-change material's transformation kinetics."

A eutectic alloy is a metallic material that melts at the lowest possible temperature for its mix of constituents. The germanium tin compound is a eutectic alloy that has been considered by the investigators as a prototypical phase-change material because it can exist at room temperature in either a stable crystalline state or a metastable amorphous state. Chrzan and his colleagues found that when germanium tin nanocrystals were embedded within amorphous silica the nanocrystals formed a bilobed nanostructure that was half crystalline metallic and half crystalline semiconductor.

"Rapid cooling following pulsed laser melting stabilizes a metastable, amorphous, compositionally mixed phase state at room temperature, while moderate heating followed by slower cooling returns the nanocrystals to their initial bilobed crystalline state," Chrzan says. "The silica acts as a small and very clean test tube that confines the nanostructures so that the properties of the BEAN/silica interface are able to dictate the unique phase-change properties."

While they have not yet directly characterized the electronic transport properties of the bilobed and amorphous BEAN structures, from studies on related systems Chrzan and his colleagues expect that the transport as well as the optical properties of the two structures will be substantially different, and that these differences will be tunable through composition alterations.

"In the amorphous alloyed state, we expect the BEAN to display normal, metallic conductivity," Chrzan says. "In the bilobed state, the BEAN will include one or more Schottky barriers that can be made to function as a diode. For purposes of data storage, the metallic conduction could signify a zero and a Schottky barrier could signify a one."

Chrzan and his colleagues are now investigating whether BEANs can sustain repeated phase-changes and whether the switching back and forth between the bilobed and amorphous structures can be incorporated into a wire geometry. They also want to model the flow of energy in the system and then use this modeling to tailor the light/current pulses for optimum phase-change properties.

The in situ transmission electron microscopy characterizations of the BEAN structures were carried out at Berkeley Lab's National Center for Electron Microscopy, one of the world's premier centers for electron microscopy and microcharacterization.

Berkeley Lab is a U.S. Department of Energy (DOE) national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the DOE Office of Science.

From sciencedaily.com

Geckos Inspire New Method to Print Electronics on Complex Surfaces

Researchers from Northwestern University and the University of Illinois at Urbana-Champaign designed a clever square polymer stamp that allows them to vary its adhesion strength. The stamp can easily pick up an array of electronic devices from a silicon surface and move and print them on a curved surface.

The research will be published Sept. 20 by the Proceedings of the National Academy of Sciences (PNAS).

"Our work proposes a very robust method to transfer and print electronics on complex surfaces," said Yonggang Huang, Joseph Cummings Professor of Civil and Environmental Engineering and Mechanical Engineering at Northwestern's McCormick School of Engineering and Applied Science.

Gecko feet clinging on glass. Geckos are masters at sticking to surfaces of all kinds and easily unsticking themselves, too. Inspired by these lizards, a team of engineers has developed a reversible adhesion method for printing electronics on a variety of tricky surfaces such as clothes, plastic and leather.


Huang, co-corresponding author of the PNAS paper, led the theory and design work at Northwestern. His colleague John Rogers, the Flory-Founder Chair Professor of Materials Science and Engineering at the University of Illinois, led the experimental and fabrication work. Rogers is a co-corresponding author of the paper.

Key to the square and squeezable polymer stamp are four pyramid-shaped tips on the stamp's bottom, one in each corner. They mimic, in a way, the micro- and nano-filaments on the gecko's foot, which the animal uses to control adhesion by increasing or decreasing contact area with a surface.

Pressing the stamp against the electronics causes the soft tips to collapse up against the stamp's body, maximizing the contact area between the stamp and the electronics and creating adhesion. The electronics are picked up in a complete batch, and, with the force removed, the soft tips snap back to their original shape. The electronics are then held in place by just the four tips, a small contact area, which allows them to be easily transferred to a new surface.

"Design of the pyramid tips is very important," Huang said. "The tips have to be the right height. If the tips are too large, they can't pick up the target, and if the tips are too small, they won't bounce back to their shape."

The researchers conducted tests of the stamp and found that the changes in contact area allow the stamp's adhesion strength to vary by a factor of 1,000. They also demonstrated that their method can print layers of electronics, enabling the development of a variety of complex devices.

The National Science Foundation and the U.S. Department of Energy supported the work.

The title of the PNAS paper is "Microstructured Elastomeric Surfaces with Reversible Adhesion and Examples of Their Use in Deterministic Assembly by Transfer Printing." In addition to Huang and Rogers, other authors of the paper are Jian Wu (a postdoctoral fellow at Northwestern), Seok Kim, Andrew Carlson, Sung Hun Jin, Anton Kovalsky, Paul Glass, Zhuangjian Liu, Numair Ahmed, Steven L. Elgan, Weiqiu Chen, Placid M. Ferreira and Metin Sitti.

From sciencedaily.com

Sensors Use Building's Electrical Wiring as Antenna

Wireless sensors scattered throughout a building can monitor everything from humidity and temperature to air quality and light levels. This seems like a good idea--until you consider the hassle and cost of replacing the sensors' batteries every couple of years. The problem is that most wireless sensors transmit data in a way that drains battery power.

Researchers at the University of Washington have come up with a way to reduce the amount of power a sensor uses to transmit data by leveraging the electrical wiring in a building's walls as an antenna that propagates the signal. The approach extends a wireless sensor's range, and it means that its battery can last up to five times longer than existing sensors, say the researchers. 

 Bit dribbler: This sensor sends data wirelessly to the copper wiring within a building’s walls. The wiring transmits the signal to a base station plugged into an outlet.

The technology, called Sensor Nodes Utilizing Powerline Infrastructure (SNUPI), sends a small trickle of data wirelessly at a frequency that resonates with the copper wiring in a building's walls, says Shwetak Patel, professor of computer science and electrical engineering at the University of Washington. The copper wiring, which can be up to 15 feet away from the sensors, picks up the signal and acts as a giant receiving antenna, transmitting the data at 27 megahertz to a base station plugged into an electrical outlet somewhere in the building. 
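
The choice of 27 megahertz makes sense on a back-of-the-envelope basis: at that frequency, the wavelength is long enough that the copper laced through a building's walls approaches the dimensions of a useful antenna (the reasoning below is an inference, not a design detail from the article):

```python
c = 3.0e8  # speed of light, m/s
f = 27e6   # SNUPI transmission frequency, Hz

wavelength = c / f
print(f"wavelength: {wavelength:.1f} m")        # ~11.1 m
print(f"quarter-wave: {wavelength / 4:.1f} m")  # ~2.8 m of conductor
```

A few meters of in-wall wiring is thus a plausible resonant pickup at 27 MHz, while a sensor's own centimeter-scale antenna is electrically tiny at that frequency, which is why coupling to the wiring extends the range.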

"The powerline has an amplification effect," says Patel. While many low-power sensors only have a range of a few feet, he says, his prototype sensors can cover most of a 3,000-square-foot home. In most wireless sensor schemes, Patel says, walls impede transmission of sensor data, but with SNUPI, "the more walls in the home, the better our system works." A paper describing the work will be presented at the Ubiquitous Computing conference in Copenhagen, Denmark, in September.

"Most academic research on in-building sensor nodes has looked at building infrastructure as a problem," says Matt Reynolds, professor of electrical and computer engineering at Duke University. Patel's work is interesting because it "turns the problem on its head," he says. "The building's wiring is part of the solution rather than part of the problem."

Using powerlines to transmit data is not a new idea. Broadband over powerlines, or BPL, uses the power grid to provide Internet connectivity. But using powerlines to extend the range of ambient sensors, and reduce their power consumption, is novel.

The researchers' prototype uses less than one milliwatt of power when transmitting data to the powerline antenna, and less than 10 percent of that power is used for communication. Future versions, says Patel, will reduce the amount of power the sensor uses for computation, and will also include a receiving antenna for two-way communication between the sensors and the base station. This could enable the sensor to accept confirmation that all of the data has been received properly.

Patel, who founded an in-home energy-monitoring startup called Zensi that was sold to Belkin earlier this year, has launched another company to commercialize SNUPI. He suspects the approach can be used for more than monitoring air quality in homes--it could also collect data from wearable sensors or implanted medical devices. In fact, Patel says, preliminary studies have shown that the popular FitBit pedometer, which sends data to a base station wirelessly, could last for a year on a single charge with the SNUPI scheme, rather than its current 14 days.

By Kate Greene 
From Technology Review

Artificial Ovary Could Help Infertile Women

Researchers at Brown University have created an "artificial human ovary" using a tissue engineering approach that they hope will one day allow scientists to mature human eggs in a laboratory.

In the near term, an artificial ovary will enable researchers to better explore the impact of environmental toxins or fertility-enhancing substances on human fertility. It could also aid the development of new forms of contraceptives and the study of ovarian cancer.

 Reproductive research: This artificial human ovary surrounds granulosa cell spheres, which are marked with fluorescent green dye.

Further down the line, it could also help women whose ovaries are damaged by chemotherapy, radiation, or illness, according to a paper published in the current issue of the Journal of Assisted Reproduction and Genetics. Today, those women have limited options for childbearing: either a hurried in-vitro fertilization cycle that yields a handful of frozen eggs, or freezing ovarian tissue in the hope that healthy eggs can someday be matured from it.

An artificial ovary, where immature eggs could be harvested by the thousands and then matured at will in the laboratory, would open up huge possibilities for the one in 1,000 women who need it, says the paper's first author, Stephan Krotz, who was a graduate student at Brown when he worked on the paper.

The artificial ovary marks the first time researchers have successfully created a three-dimensional environment that contains the three main types of ovarian cells: theca cells, granulosa cells, and the eggs, known as oocytes. The paper's lead researcher is Sandra Carson, a professor of obstetrics and gynecology at Brown and Women and Infants Hospital of Rhode Island.

Alan B. Copperman, director of infertility at Mt. Sinai Medical Center in New York, says clinical benefits are years and many scientific hurdles away, but he's impressed by the research potential of the group's work. "The concept of creating an artificial three-dimensional environment, and the fact that we can take out immature eggs and let them grow and mature into viable eggs, is really exciting," he says.

Copperman says the artificial ovary could serve as a model to help researchers better understand the ovarian aging process, the focus of much of his research. "If we can establish a viable testing environment, we can learn more about how to optimize eggs, and discriminate good from bad eggs."

The Brown researchers' innovation was using a honeycomb-shaped mold to support the egg. Human eggs are too large to be grown without some kind of support structure. "If you try to grow it by itself, in a dish, it basically collapses on itself," says Krotz, now a reproductive endocrinologist and fertility specialist at the Advanced Fertility Center of Texas.

The researchers broke ovarian cells out of human tissue using enzymes, and poured them into a mold made of agar, a gelatinous substance usually derived from algae. The different types of cells then assembled themselves into a honeycomb shape, with the theca and granulosa cells forming the structure. The egg cells, or oocytes, were inserted inside and bathed with hormones to stimulate the theca cells to produce androgen, and the granulosa cells to make estrogen.

"We took a different tack to rely on the inherent adhesiveness of cells to drive self-assembly," says Jeffrey Morgan, codirector of the Center for Biomedical Engineering at Brown, who led this aspect of the research. "In that nonadhesive environment, the cells will stick to each other and self-assemble a three-dimensional structure, and it conforms to the shape of our mold."

Researchers had assumed that, if allowed to self-assemble, cells would form a sphere, but Morgan says he showed that they can also create more complex forms with a little prompting.

Kim L. Thornton, a reproductive endocrinologist at Boston IVF, one of the nation's largest fertility centers, says it's tricky to re-create in a lab all of the activities that go on in a woman's ovaries. "One of the challenges with maturation is, there are a lot of things that go on locally that may affect the ability of oocytes to become mature," she says. "We can't duplicate all of those conditions" in a lab dish. However, Thornton says, the Brown model "is interesting, and it's certainly promising."

Carson says that now that the team has created the model, she wants to go back and look more closely at how it functions. She would like to identify various proteins involved in egg maturation, and be able to explore whether those proteins can be altered as a means of contraception. "We could also theoretically find something that might be important in the development of ovarian cancer," she says.

The work can also be used to test for toxic effects from everyday products, such as plastics and insecticides, as well as medications--"anything we might be able to test against the control," Carson says. "We're not there yet, but I think this is going to be the most powerful use of the model."

By Karen Weintraub
From Technology Review

Better Bugs to Make Plastics

A startup that has successfully engineered bacteria to make common industrial chemicals is now using its technology to engineer organisms to make renewable fuel. 

OPX Biotechnologies, based in Boulder, Colorado, says its strains of E. coli can be used to convert sugar to acrylic acid--a key component of paints, diapers, and adhesives--at lower costs than making it from petroleum. The bacteria-based process produces 75 percent fewer carbon-dioxide emissions than making the same amount from oil, and a single commercial plant using the process could reduce petroleum consumption by over 500,000 barrels per year. 

 Strain testing: An OPX biochemist prepares new culture media for developing new strains of microorganisms.

The technology has been demonstrated in a pilot plant with a 200-liter fermentation tank, and the company plans to build a 20,000-liter system starting next year. Then it plans to build a commercial plant in 2014 that can produce 100 million pounds of acrylic acid. So far the company has raised $22.4 million in venture capital. The company is also working on a process that employs bacteria to convert carbon dioxide and hydrogen into diesel fuel. The U.S. Department of Energy's Advanced Research Projects Agency for Energy (ARPA-E) recently gave the company a $6 million grant to demonstrate the technology in a pilot plant within three years.

The company is one of dozens of startups that have sprung up to make chemicals from plant matter rather than petroleum. It's something researchers have been trying to do for decades, but only recently have they had any success with commercial-scale production. For example, in 2007, DuPont started commercial production of propanediol (used for plastics and cosmetics) made from corn sugar. 

OPX's approach to engineering strains of microorganisms is faster and cheaper than conventional methods, says CEO Charles Eggert. The company has developed a novel way of generating mutations that lets it track which genes are responsible for performance changes. Rather than hoping that all of the best changes are combined at random in a single strain, which is the case with the conventional approach, the OPX researchers use this detailed information to select genetic changes from a variety of randomly created strains and combine them into one. 
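The difference between the two approaches can be seen in a toy simulation. Everything below--the genes, effect sizes, yields, and the simple averaging used to score each mutation--is invented for illustration and is not OPX's actual genomics pipeline; it only mimics the logic of tracking mutations and recombining the winners.

```python
# Toy model: tracked-mutation strain engineering vs. keeping the best
# random mutant. All data here is synthetic.
import random

random.seed(1)

GENES = [f"gene_{i}" for i in range(20)]
# Hidden "true" effect of mutating each gene on product yield (made up).
TRUE_EFFECT = {g: random.gauss(0.0, 1.0) for g in GENES}

def make_random_strain(n_mutations: int = 3) -> frozenset:
    """A strain is the set of genes that happened to mutate."""
    return frozenset(random.sample(GENES, n_mutations))

def measure_yield(strain: frozenset) -> float:
    """Observed performance: summed mutation effects plus assay noise."""
    return sum(TRUE_EFFECT[g] for g in strain) + random.gauss(0.0, 0.1)

# Screen a small library of random strains, recording each one's mutations.
library = {make_random_strain() for _ in range(50)}
measured = {s: measure_yield(s) for s in library}

# Score each mutation by the average yield of the strains carrying it --
# a crude stand-in for knowing which genes drive performance changes.
scores = {}
for g in GENES:
    carriers = [y for s, y in measured.items() if g in s]
    if carriers:
        scores[g] = sum(carriers) / len(carriers)

# Conventional route: keep the single best random strain as-is.
best_random = max(measured, key=measured.get)
# Tracked route: combine the top-scoring mutations into one new strain.
combined = frozenset(sorted(scores, key=scores.get, reverse=True)[:3])

print("best random strain yield:", round(measured[best_random], 2))
print("combined strain yield:   ", round(measure_yield(combined), 2))
```

Across most random seeds the combined strain matches or beats the best random one, because the best individual mutations are rarely all present in any single randomly generated strain.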

Commercializing the technology will be challenging. The company still needs to produce acrylic acid at costs at or below the costs for conventional petroleum-based acrylic acid. Eggert says this is possible based on the performance of OPX's organisms, but costs are often greater than expected. For one thing, it's difficult for bio-based approaches to produce a product with the 99.99 percent purity levels that industrial customers require, says Robert Kirschbaum, vice president of open innovation at Netherlands-based DSM, a major producer of chemicals, including acrylic acid. If the chemical is not pure enough, a company must buy expensive purification equipment, which throws off its cost estimates. 

For its ARPA-E diesel project, OPX is using its technology to engineer a bacterium, Cupriavidus necator, to produce fatty acids for making biodiesel. The fuel could be mixed with petroleum-based diesel for use in vehicles. OPX could use carbon dioxide emitted from power plants and hydrogen from a variety of sources, including natural gas. The goal of the ARPA-E project is to use hydrogen generated from renewable sources, such as water splitting using electricity from solar panels.

By Kevin Bullis 
From Technology Review

Doubling Lithium-Ion Battery Storage

Battery startup Amprius says it has developed batteries capable of storing twice as much energy as anything on the market today, thanks to nanostructured silicon electrodes. The company says it is partnering with several as-yet unnamed major consumer electronics manufacturers to bring the batteries to market by early 2012. The batteries will allow portable electronics to run 40 percent longer without a recharge.

Amprius also says it is working with several major automakers who are evaluating the electrode materials for use in batteries for electric vehicles. The company is not yet disclosing these commercial partners, either.

 Battery boost: Amprius’s silicon-nanowire battery anodes are made inside a vacuum chamber (top image). The company has combined them with conventional lithium-ion cathodes and electrolytes to make batteries that store twice as much energy as those on the market today (bottom image).


When a lithium-ion battery is charged, lithium ions move from its cathode to its anode, while electrons flow in through an external electrical circuit. The process is reversed during discharge. The more lithium the anode can take in, the more total energy the battery can store, and the longer it can last. For the past 30 years, lithium-ion batteries have used carbon anodes. With no new materials, the total energy storage of these batteries has improved by only about 7 percent every year due to incremental engineering refinements.

Silicon has 10 times the theoretical lithium storage capacity of the carbon used to make battery anodes, but it's been difficult for researchers to make it into a practical battery electrode. As large volumes of lithium ions move in and out of the material during charge and discharge, silicon swells and cracks.
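A quick calculation also shows why a tenfold anode improvement doesn't translate into a tenfold battery. The sketch below combines electrode capacities the way resistors combine in parallel; the numbers are textbook ballpark figures, not Amprius's cell data, and the calculation ignores voltage, electrolyte, and packaging mass, all of which shape the final cell-level gains.

```python
# Why anode gains saturate: the full cell pairs the anode with a cathode,
# and the lower-capacity electrode dominates the combined figure.

def cell_capacity(anode_mah_g: float, cathode_mah_g: float) -> float:
    """Specific capacity of the paired electrodes, in mAh per gram of both."""
    return 1.0 / (1.0 / anode_mah_g + 1.0 / cathode_mah_g)

CATHODE = 150.0    # typical lithium metal oxide cathode, mAh/g (assumed)
GRAPHITE = 372.0   # theoretical capacity of a graphite anode, mAh/g
SILICON = 3720.0   # ~10x graphite, near silicon's theoretical limit

print(f"graphite-anode pair: {cell_capacity(GRAPHITE, CATHODE):.0f} mAh/g")  # ~107
print(f"silicon-anode pair:  {cell_capacity(SILICON, CATHODE):.0f} mAh/g")   # ~144
```

By this crude measure, a 10x anode buys only about 35 percent at the electrode-pair level, which is why cell-level results like the doubling Amprius reports depend on the design of the whole battery, not the anode alone.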

In 2007, Yi Cui, a Stanford University materials science and engineering professor, demonstrated that nanostructured silicon films could repeatedly take in and release lithium without these mechanical problems, making silicon a potential anode material that could as much as double the energy storage of lithium batteries.

In the 18 months since Amprius was founded, researchers at the company have built on Cui's research and demonstrated that silicon anodes can be used in practical batteries. The anodes are made of silicon nanowires that are vertically arrayed and tethered in place, but flexible.

As the nanowires take in lithium, they can swell and bend to accommodate it without breaking. On their own, however, the nanowires aren't mechanically stable enough. Amprius has addressed this problem by giving them a thin, reinforcing metal core that the company likens to rebar (the steel rod used to reinforce concrete structures). This "rebar" prevents the anode from expanding and contracting too much. In testing, the silicon anodes stored three times more energy by weight than carbon anodes.

Prototype batteries have been tested through 250 charging cycles and have been shown to store twice the energy of a conventional battery. To be a serious contender for use in electric vehicles, however, the batteries will need to go through 3,000 charging cycles, says Ryan Kottenstette, the company's director of business development. 

Amprius CEO Kang Sun says the company is moving aggressively to bring its batteries to electric vehicles. "We are in a hurry, because electrification is moving forward faster than anyone thought," he says. Sun, the former president of Chinese solar manufacturer JA Solar, notes that there are already about 80 electric-vehicle makers in China. "We have to be fast," he says. The company expects to disclose some automaker partnerships in the next few months.

Conventional carbon anodes are made using large roll-to-roll systems. Kottenstette acknowledges that the vacuum deposition technique used for silicon nanowires will be more expensive. However, once potential manufacturing issues are ironed out, the company expects the boost in storage capacity to make up for the increased cost. The company is also working on a roll-to-roll vapor-deposition system. "Making it compatible with current processes is important to us," says Kottenstette. 

By Katherine Bourzac 
From Technology Review

Funneling Solar Energy: Antenna Made of Carbon Nanotubes Could Make Photovoltaic Cells More Efficient

"Instead of having your whole roof be a photovoltaic cell, you could have little spots that were tiny photovoltaic cells, with antennas that would drive photons into them," says Michael Strano, the Charles and Hilda Roddey Associate Professor of Chemical Engineering and leader of the research team.

Strano and his students describe their new carbon nanotube antenna, or "solar funnel," in the Sept. 12 online edition of the journal Nature Materials. Lead authors of the paper are postdoctoral associate Jae-Hee Han and graduate student Geraldine Paulus.

 This filament containing about 30 million carbon nanotubes absorbs energy from the sun as photons and then re-emits photons of lower energy, creating the fluorescence seen here. The red regions indicate highest energy intensity, and green and blue are lower intensity.

Their new antennas might also be useful for any other application that requires light to be concentrated, such as night-vision goggles or telescopes.

Solar panels generate electricity by converting photons (packets of light energy) into an electric current. Strano's nanotube antenna boosts the number of photons that can be captured and transforms the light into energy that can be funneled into a solar cell.

The antenna consists of a fibrous rope about 10 micrometers (millionths of a meter) long and four micrometers thick, containing about 30 million carbon nanotubes. Strano's team built, for the first time, a fiber made of two layers of nanotubes with different electrical properties -- specifically, different bandgaps.

In any material, electrons can exist at different energy levels. When a photon strikes the surface, it excites an electron to a higher energy level, which is specific to the material. The interaction between the energized electron and the hole it leaves behind is called an exciton, and the difference in energy levels between the hole and the electron is known as the bandgap.

The inner layer of the antenna contains nanotubes with a small bandgap, and nanotubes in the outer layer have a higher bandgap. That's important because excitons like to flow from high to low energy. In this case, that means the excitons in the outer layer flow to the inner layer, where they can exist in a lower (but still excited) energy state.
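The funneling logic can be made concrete with a little energy bookkeeping. In the sketch below, the two bandgap values are hypothetical stand-ins (actual nanotube bandgaps depend on each tube's structure); the point is the direction of exciton flow, not the specific numbers.

```python
# Illustrative energy bookkeeping for the exciton "funnel".
H_C = 1239.84  # Planck's constant x speed of light, in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon with the given wavelength."""
    return H_C / wavelength_nm

outer_gap_ev = 1.2  # larger-bandgap nanotubes, outer layer (assumed value)
inner_gap_ev = 0.9  # smaller-bandgap nanotubes, inner layer (assumed value)

photon = photon_energy_ev(800.0)  # a ~1.55 eV near-infrared photon
assert photon > outer_gap_ev      # energetic enough for the outer layer to absorb

# An exciton relaxes toward the lowest available state, so it migrates
# from the 1.2 eV outer tubes to the 0.9 eV inner tubes at the core.
print(f"photon absorbed at:      {photon:.2f} eV")
print(f"exciton in outer layer: ~{outer_gap_ev:.1f} eV")
print(f"exciton at fiber core:  ~{inner_gap_ev:.1f} eV (concentrated, still excited)")
```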

Therefore, when light energy strikes the material, all of the excitons flow to the center of the fiber, where they are concentrated. Strano and his team have not yet built a photovoltaic device using the antenna, but they plan to. In such a device, the antenna would concentrate photons before the photovoltaic cell converts them to an electrical current. This could be done by constructing the antenna around a core of semiconducting material.

The interface between the semiconductor and the nanotubes would separate the electron from the hole, with electrons being collected at one electrode touching the inner semiconductor, and holes collected at an electrode touching the nanotubes. This system would then generate electric current. The efficiency of such a solar cell would depend on the materials used for the electrode, according to the researchers.

Strano's team is the first to construct nanotube fibers in which they can control the properties of different layers, an achievement made possible by recent advances in separating nanotubes with different properties.

While the cost of carbon nanotubes was once prohibitive, it has been coming down in recent years as chemical companies build up their manufacturing capacity. "At some point in the near future, carbon nanotubes will likely be sold for pennies per pound, as polymers are sold," says Strano. "With this cost, the addition to a solar cell might be negligible compared to the fabrication and raw material cost of the cell itself, just as coatings and polymer components are small parts of the cost of a photovoltaic cell."

Strano's team is now working on ways to minimize the energy lost as excitons flow through the fiber, and on ways to generate more than one exciton per photon. The nanotube bundles described in the Nature Materials paper lose about 13 percent of the energy they absorb, but the team is working on new antennas that would lose only 1 percent.

Funding: National Science Foundation Career Award, MIT Sloan Fellowship, the MIT-Dupont Alliance and the Korea Research Foundation.

From sciencedaily.com