Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's…

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and …

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a …

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the …

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed…

World's First Anti-Laser Built

Conventional lasers, which were first invented in 1960, use a so-called "gain medium," usually a semiconductor like gallium arsenide, to produce a focused beam of coherent light -- light waves with the same frequency and amplitude that are in step with one another.

 In the anti-laser, incoming light waves are trapped in a cavity where they bounce back and forth until they are eventually absorbed. Their energy is dissipated as heat.

Last summer, Yale physicist A. Douglas Stone and his team published a study explaining the theory behind an anti-laser, demonstrating that such a device could be built using silicon, the most common semiconductor material. But it wasn't until now, after joining forces with the experimental group of his colleague Hui Cao, that the team actually built a functioning anti-laser, which they call a coherent perfect absorber (CPA).

The team, whose results appear in the Feb. 18 issue of the journal Science, focused two laser beams with a specific frequency into a cavity containing a silicon wafer that acted as a "loss medium." The wafer aligned the light waves in such a way that they became perfectly trapped, bouncing back and forth indefinitely until they were eventually absorbed and transformed into heat.

Stone believes that CPAs could one day be used as optical switches, detectors and other components in the next generation of computers, called optical computers, which will be powered by light in addition to electrons. Another application might be in radiology, where Stone said the principle of the CPA could be employed to target electromagnetic radiation to a small region within normally opaque human tissue, either for therapeutic or imaging purposes.

Theoretically, the CPA should be able to absorb 99.999 percent of the incoming light. Due to experimental limitations, the team's current CPA absorbs 99.4 percent. "But the CPA we built is just a proof of concept," Stone said. "I'm confident we will start to approach the theoretical limit as we build more sophisticated CPAs." Similarly, the team's first CPA is about one centimeter across at the moment, but Stone said that computer simulations have shown how to build one as small as six microns (about one-twentieth the width of an average human hair).
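The idea behind coherent perfect absorption can be illustrated with a toy scattering matrix. The numbers below are purely illustrative, not fitted to the Yale device: a two-port absorber acts as a CPA when its scattering matrix has a (near-)zero eigenvalue, so the matching two-beam input, with the right amplitudes and phases, is almost entirely absorbed, while a single beam alone is not.

```python
import numpy as np

# Toy 2x2 scattering matrix with one zero eigenvalue (illustrative values).
S = np.array([[0.4, -0.4],
              [-0.4, 0.4]]) * np.exp(1j * 0.3)

vals, vecs = np.linalg.eig(S)
k = np.argmin(np.abs(vals))   # eigenvector belonging to the smallest eigenvalue
good = vecs[:, k]             # the "CPA mode": correct amplitudes and phases
bad = np.array([1.0, 0.0])    # a single beam at one port, for comparison

def absorbed_fraction(a_in):
    """Fraction of incident power that does not come back out."""
    a_in = a_in / np.linalg.norm(a_in)
    a_out = S @ a_in
    return 1.0 - np.linalg.norm(a_out) ** 2

print(absorbed_fraction(good))  # essentially 1.0: both beams, matched phase
print(absorbed_fraction(bad))   # 0.68: one beam alone is only partly absorbed
```

The contrast between the two inputs is the point: absorption here is an interference effect, which is why the device needs two coherent beams rather than one.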

The team that built the CPA, led by Cao and another Yale physicist, Wenjie Wan, demonstrated the effect for near-infrared radiation, which is slightly "redder" than the eye can see and which is the frequency of light that the device naturally absorbs when ordinary silicon is used. But the team expects that, with some tinkering of the cavity and loss medium in future versions, the CPA will be able to absorb visible light as well as the specific infrared frequencies used in fiber optic communications.

It was while explaining the complex physics behind lasers to a visiting professor that Stone first came up with the idea of an anti-laser. After suggesting that his colleague imagine a laser running in reverse as a way to understand how a conventional laser works, Stone began contemplating whether it was possible to actually build such a backward laser, one that would absorb light at specific frequencies rather than emit it.

"It went from being a useful thought experiment to having me wondering whether you could really do that," Stone said. "After some research, we found that several physicists had hinted at the concept in books and scientific papers, but no one had ever developed the idea."


Meka's M-1 Mobile Manipulator, a cuter version of Cody the spit bath robot

Remember Cody? The robot from Georgia Tech designed to give spit baths to the elderly and disabled? Well, Cody's got an attractive younger cousin named M-1, and for $340,000 this fine piece of machinery could be all yours. Built by San Francisco-based Meka Robotics, the M-1 Mobile Manipulator (based on Cody) runs on a combination of ROS and proprietary software and sports a Kinect-compatible head with a five megapixel Ethernet camera, arms with six-axis force-torque sensors at the wrist, force controlled grippers, and an omnidirectional mobile base. If the standard features don't fit your needs, Meka offers various upgrades, including four-fingered hands and humanoid heads, complete with expressive eyelids (à la Meka's Dreamer), ears, and additional sensor compatibility. These add-ons will of course cost you, but we think it's worth it to have those big translucent eyes staring back at you. A rather touching demo after the jump.

By Christopher Trout
From engadget

What Watson Can Learn From the Human Brain

Watson won. That set of microchips will soon join the pantheon of machines that have defeated humans, from the steam-powered hammer that killed John Henry to the Deep Blue supercomputer that battled Kasparov. Predictably enough, the victory inspired a chorus of “computer overlord” anxieties, as people used the victory of microchips to proclaim the decline of the human mind, or at least the coming of the singularity.

Personally, I was a little turned off by the whole event — it felt like a big marketing campaign for IBM and Jeopardy. Nevertheless, I think the real moral of Watson is that our brain, even though it lost the game, is a pretty stunning piece of meaty machinery. Although we always use the latest gadget as a metaphor for the black box of the mind — our nerves were like telegraphs before they were like telephone exchanges before they were like computers — the reality is that our inventions are pretty paltry substitutes. Natural selection has nothing to worry about.

Let’s begin with energy efficiency. One of the most remarkable facts about the human brain is that it requires less energy (12 watts) than a light bulb. In other words, that loom of a trillion synapses, exchanging ions and neurotransmitters, costs less to run than a little incandescence. Compare that to Deep Blue: when the machine was operating at full speed, it was a fire hazard, and required specialized heat-dissipating equipment to keep it cool. Meanwhile, Kasparov barely broke a sweat.

The same lesson applies to Watson. I couldn’t find reliable information on its off-site energy consumption, but suffice to say it required many tens of thousands of times as much energy as all the human brains on stage combined. While this might not seem like a big deal, evolution long ago realized that we live in a world of scarce resources. Evolution was right. As computers become omnipresent in our lives — I’ve got one dissipating heat in my pocket right now — we’re going to need to figure out how to make them more efficient. Fortunately, we’ve got an ideal prototype locked inside our skull.

The second thing Watson illustrates is the power of metaknowledge, or the ability to reflect on what we know. As Vaughan Bell pointed out a few months ago, this is Watson’s real innovation:

Answering this question needs pre-existing knowledge and, computationally, two main approaches. One is constraint satisfaction, which finds which answer is the ‘best fit’ to a problem which doesn’t have mathematically exact solution; and the other is a local search algorithm, which indicates when further searching is unlikely to yield a better result – in other words, when to quit computing and give an answer – because you can always crunch more data.
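The two ingredients Bell describes, a "best fit" scorer plus a local search that knows when to quit, can be sketched as a toy hill climber. The scoring function here is a stand-in invented for illustration; Watson's actual evidence-weighting is far more elaborate.

```python
import random

random.seed(0)  # for a reproducible run of this sketch

def local_search(score, start, neighbors, patience=50):
    """Climb toward a better-scoring answer; quit once improvement dries up."""
    best, best_score = start, score(start)
    stalled = 0
    while stalled < patience:          # "when to quit computing"
        cand = random.choice(neighbors(best))
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
            stalled = 0                # progress: keep searching
        else:
            stalled += 1               # no progress: edge toward quitting
    return best, best_score

# Toy problem: the "best fit" answer is the integer 42.
score = lambda x: -abs(x - 42)
neighbors = lambda x: [x - 1, x + 1]
answer, confidence = local_search(score, start=0, neighbors=neighbors)
print(answer)
```

The `patience` counter is the crude analogue of a confidence threshold: once further search stops paying off, the system commits to the best answer it has.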

Our brain comes preprogrammed with metaknowledge: We don’t just know things — we know we know them, which leads to feelings of knowing. I’ve written about this before, but one of my favorite examples of such feelings is when a word is on the tip of the tongue. Perhaps it occurs when you run into an old acquaintance whose name you can’t remember, although you know that it begins with the letter J. Or perhaps you struggle to recall the title of a recent movie, even though you can describe the plot in perfect detail.

What’s interesting about this mental hiccup is that, even though the mind can’t remember the information, it’s convinced that it knows it. We have a vague feeling that, if we continue to search for the missing word, we’ll be able to find it. (This is a universal experience: The vast majority of languages, from Afrikaans to Hindi to Arabic, even rely on tongue metaphors to describe the tip-of-the-tongue moment.) But here’s the mystery: If we’ve forgotten a person’s name, then why are we so convinced that we remember it? What does it mean to know something without being able to access it?

This is where feelings of knowing prove essential. The feeling is a signal that we can find the answer, if only we keep on thinking about the question. And these feelings aren’t just relevant when we can’t remember someone’s name. Think, for instance, about the last time you raised your hand to speak in a group setting: Did you know exactly what you were going to say when you decided to open your mouth? Probably not. Instead, you had a funny hunch that you had something worthwhile to say, and so you began talking without knowing how the sentence would end. Likewise, those players on Jeopardy are able to ring the buzzer before they can actually articulate the answer. All they have is a feeling, and that feeling is enough.

These feelings of knowing illustrate the power of our emotions. The first thing to note is that these feelings are often extremely accurate. The Columbia University psychologist Janet Metcalfe, for instance, has demonstrated that when it comes to trivia questions, our feelings of knowing predict our actual knowledge. Think, for a moment, about how impressive this is: the metacognitive brain is able to almost instantly make an assessment about all the facts, errata and detritus stuffed into the cortex. The end result is an epistemic intuition, which tells us whether or not we should press the buzzer. Watson won, at least in part, because it was a fraction of a second faster with its hunches. It didn’t know more. It just knew what it knew first.

I certainly don’t mean to take away from the achievements of those IBM engineers. Watson is an amazing machine. Nevertheless, I think the real lesson of the victorious Watson is that we have much to learn from the software and hardware running in our head. If we’re going to live in a world saturated with machines, then those machines had better learn from biology. As natural selection learned long ago, computational power without efficiency is an unsustainable strategy.

P.S. I really enjoyed Stephen Baker’s Final Jeopardy, if you’d like to learn more about the struggle to create Watson.

By Jonah Lehrer 

Light-Emitting Rubber Could Sense Structural Damage

Researchers at Princeton University have built a new type of sensor that could help engineers quickly assess the health of a building or bridge. The sensor is an organic laser, deposited on a sheet of rubber: when it's stretched—by the formation of a crack, for instance—the color of light it emits changes.

 Making waves: An atomic force microscopy image shows ripples on the surface of a rubbery material called PDMS. The ripples affect the color of light emitted when the material is stretched.

"The idea came from the notion that perhaps it's possible to cover large structures like bridges with a skin that you can use to detect deformation of the structure from a distance," says Sigurd Wagner, professor of electrical engineering at Princeton University, who developed the stretchable laser sensor with and Patrick Görrn, a researcher at Princeton. The work was published last month in Advanced Materials.

For more than a decade, researchers have explored ways to make dense arrays of sensors capable of covering large areas. Sensing skins are especially intriguing to civil engineers, who know the importance of detecting damage in infrastructure so that disasters like the 2007 collapse of a bridge in Minneapolis can be averted. "There's really a critical need to develop better sensors that can be applied to infrastructure systems," says Jerome Lynch, professor of civil and environmental engineering at the University of Michigan. 

Traditional strain sensors simply measure stress along a particular line. One such sensor is a wire that changes resistivity when it's under strain. Another type is an optical fiber that indicates strain when light injected at one end is scattered by a defect in the structure. "But the problem is if the damage occurs between the sensors—it's difficult to detect," says Branko Glisic, a professor of civil and environmental engineering at Princeton who was not directly involved with the project. A stretchable laser could solve this problem by covering more area than wires or fiber optics.

To make the device, a sheet of stretchable material called polydimethylsiloxane (PDMS) was specially prepared so that it had a wavy surface. Next, the researchers spun a liquid mixture of organic molecules onto the wavy surface. When an ultraviolet laser is shone on the organic layer (a method of powering a laser called optical pumping), it stimulates the emission of photons from the organic molecules. Lasing occurs because the wavy surface acts as a diffraction grating, reflecting the light between the waves, effectively amplifying the signal.

The molecules normally emit visible red light, but when the rubber surface is stretched or compressed, it alters the color of light that is emitted. By stretching the rubber 2.2 percent of its length, the researchers could change the color of light. A light detector would notice a difference of about five nanometers between the starting and ending wavelengths of emitted light. This could correlate to tiny changes in strain within a structure, explains Wagner. "It's highly sensitive, and that's the advantage," he says. "In many cases, structural or civil engineers would like to see incipient failure, not a visible crack; and they'd like to have a sensor capable of that sensitive measurement."
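A quick sanity check on the reported numbers. This is a back-of-envelope sketch, assuming a starting wavelength of about 650 nm (plain "visible red"; the article does not give the exact value) and that, as in a distributed-feedback laser, the emission wavelength scales linearly with the grating period.

```python
# Assumed starting wavelength and the strain reported in the article.
lambda0_nm = 650.0   # red emission (assumption, not stated in the article)
strain = 0.022       # the 2.2 percent stretch reported

# If the grating period followed the stretch completely, the emission
# wavelength would shift proportionally:
shift_nm = lambda0_nm * strain
print(round(shift_nm, 1))  # ~14 nm
```

Under these assumptions a full 2.2 percent stretch would shift the emission by roughly 14 nm, while the article reports a change of about five nanometers, which would suggest the thin organic layer follows only part of the rubber's deformation.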

Optically pumping the stretchable laser skin could be an advantage for the system. It could reduce the cost of installation, because it wouldn't require wires. It would also mean that an engineer could stand at a distance from a structure and shine ultraviolet light onto the surface of the sensing skin to detect tiny changes in strain.

The concept could "fill a critical niche in structural health," says Lynch. "The approach seems novel, and it's interesting what kind of results the technology could yield when deployed in the real world." Lynch is developing large-area sensing skins that rely on layers of carbon nanotubes and other organic molecules to sense strain, cracking, and corrosion, among other defects.

Wagner says that his prototype still needs to be fine-tuned. While the PDMS sheets can stretch a great distance, the organic layers shear off when they're extended too far. Fixing this problem will likely come down to testing different types of light-emitting molecules and finding a way to better affix them to the PDMS. "We know the experiments to do," he says. "We just haven't found the magic recipe yet."

By Kate Greene
From Technology Review

Nerves Light Up to Warn Surgeons Away

Surgeons take pains to avoid injuring nerves in and around surgical sites—a stray cut could lead to muscle weakness, pain, numbness, or even paralysis. In delicate operations like prostate removal, for instance, accidentally damaging nerves can lead to incontinence or erectile dysfunction. Scientists at the University of California, San Diego, have announced a new method for lighting up nerves in the body with fluorescent peptides, which could act as markers to keep surgeons away.

 Nerve endings: In mice, it is hard to see nerves that descend into a tumor under normal light (left, in blue). But when the mice are injected with a nerve-specific fluorescent probe, the buried branches are easier to see (middle). Combined with a tumor probe, the approach can highlight both the tumor to be removed (right, in green) and the nerves to be spared.

Quyen Nguyen, a surgeon at UCSD who led the research, says that during their training, surgeons learn where nerves are located and use that knowledge to avoid them. Most of the time, knowledge and experience are enough, but if anything is out of place or damaged for some reason, "finding the nerves can be challenging," Nguyen says. Fluorescence offers a way to let surgeons see them "even before they encounter them with their tools," she says.

Nguyen, working with chemist Roger Tsien, has previously created fluorescent markers that can illuminate the margins of tumors during surgery. To identify a specific tag for nerves, her team used a technique called phage display. A phage is a virus that infects bacteria, and it displays a small protein, called a coat protein, on its surface. Scientists can easily change the sequence of amino acids of this protein, creating millions of phages that display different coats. Using this technique, Nguyen's team looked for sequences of amino acids that preferentially stuck to nerve cells, and used that information to design a peptide that can serve as a nerve-specific tag. By adding a fluorescent probe to the peptide, they created a beacon that illuminates nerves under a particular wavelength of light.

The researchers injected their peptide into the bloodstream of mice, and found that all peripheral nerves (those outside the brain and spinal cord) were labeled within two hours. The effect lasted for several hours, and was completely gone after a day. The label worked even if nerves were damaged. The researchers also tested the peptide in human tissue samples and confirmed that it would also bind to human nerves.

These probes could be used in concert with probes for cancer, helping a surgeon to remove a tumor while avoiding the nerves around it. The technology has been licensed by Avelas Biosciences, a small biotech startup cofounded by Tsien; further animal testing will be needed before it's ready for clinical studies.

Hisataka Kobayashi, chief scientist in the Molecular Imaging Program at the National Cancer Institute, says that "there's no question this is a great technology" that has the potential to benefit surgeons. But he says the researchers will need to identify the exact molecular target of the probes and verify that they are completely nontoxic.

By Courtney Humphries
From Technology Review

Laser-Quick Data Transfer

For the first time, researchers have grown lasers from high-performance materials directly on silicon. Bringing together electrical and optical components on computer chips would speed data transfer within and between computers, but the incompatibility of the best laser materials with the silicon used to make today's chips has been a major hurdle.

 Laser on a chip: Nanolasers grown on silicon chips are tested on this table at the University of California, Berkeley. The chips are inside the chamber in the center of the picture.

By growing nanolasers made of so-called exotic semiconductors on silicon, researchers at the University of California, Berkeley, have surmounted this hurdle. With further development, the Berkeley lasers could provide ways to transfer more data more quickly, speeding up computing within supercomputers and making it faster to download large files.

"Getting data on and off your laptop is becoming a bottleneck," says Mario Paniccia, director of Intel's Photonics Technology Lab. It's difficult to push data through today's copper wiring at rates higher than 10 gigabits per second. This slows data transfer between components of a computer, such as the CPU and the memory, and imposes limitations on design. Designers must put components as close together as possible so that data doesn't have to travel too far, generating heat and slowing down the system.

Data encoded in light pulses can travel farther faster and with lower losses. But the only way to get optical components onto today's chips is to do it using materials and manufacturing methods that are compatible with the silicon systems used in today's fabs. "The future of photonics is based on silicon," says John Bowers, Kavli chair of nanotechnology at the University of California, Santa Barbara. The problem is that silicon itself is a poor laser material, wasting a lot of energy and making little light.

The most efficient lasers are made out of a group of materials called "III-V" semiconductors, whose numerical name comes from the columns of the periodic table where the elements used to make them are found. Like silicon, these materials are crystalline. But the crystal lattices of silicon and of these materials do not line up with one another because the atoms are different sizes. When researchers grow III-V materials on top of silicon, the III-V crystal strains to align with the silicon crystal, leading to defects that degrade performance.  
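The mismatch the article describes is easy to quantify from published lattice constants. Taking gallium arsenide as a representative III-V material (the Berkeley devices use related indium-gallium-arsenide compounds):

```python
# Room-temperature lattice constants, in angstroms (standard textbook values).
a_si = 5.431     # silicon
a_gaas = 5.653   # gallium arsenide, a representative III-V semiconductor

# Fractional lattice mismatch between the film and the substrate.
mismatch = (a_gaas - a_si) / a_si
print(f"{mismatch:.1%}")  # ~4.1%
```

A mismatch of roughly four percent is enormous by crystal-growth standards, which is why conventionally grown III-V films on silicon are riddled with the strain-induced defects the article mentions.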

Connie Chang-Hasnain, professor of electrical engineering and computer science at Berkeley, has overcome this incompatibility between silicon and laser materials by taking advantage of the properties of nanostructures and by carefully controlling the growth process. The Berkeley researchers start by placing a silicon substrate inside a chemical growth chamber, raising the temperature to 400 °C, and flowing in gases containing indium, gallium, and arsenic. By controlling the ratios of the gases and their flow rates, Chang-Hasnain has found, it's possible to direct the growth of these III-V materials so that it starts from a small point called a "seed." As an indium gallium arsenide nanopillar sprouts up from the seed, it forms a defect-free crystal. The seed seems to protect the rest of the structure from the influence of the underlying silicon. The researchers then flow in a second set of gases to make a gallium-arsenide shell around the pillar.

When the nanopillar is pumped with light from another laser, the light spirals around inside the pillar, as if running up and down a spiral staircase. The difference in materials between the core and the shell encourages this effect, trapping the photons in this spiral until they reach a high enough energy threshold and are emitted. This spiraling effect hasn't been seen in other types of lasers before. The results are described in the journal Nature Photonics.

The next step is to demonstrate that the laser can be pumped with electrical energy, a key to making a compact laser. Chang-Hasnain is confident the Berkeley researchers will make an electrically pumped laser: in another publication, in Nano Letters, her group demonstrated exotic semiconductor diodes on silicon, which they're now adapting to pump the nanolasers.

Another key to making lasers on silicon chips is not to let the temperature get too high. Chang-Hasnain says that her process could eventually be used to grow high-quality lasers on otherwise finished silicon chips patterned with transistors and optical components, giving them the capability of encoding data into pulses of light. Depositing high-quality III-V semiconductor crystals usually requires higher temperatures—instead of 400 °C, these materials are usually grown at 700 °C, a temperature that would destroy a microprocessor. Chang-Hasnain says it's the nanostructure of the lasers that makes this possible: high-quality nanostructures can generally be grown at lower temperatures than large films made from the same materials.

"A lot of progress has been made on silicon optical components," says Intel's Paniccia. However, progress on lasers that are compatible with silicon chips has lagged behind. Researchers have made various  optical components from silicon using materials and processes already present in chip-manufacturing lines. But they have to add on the lasers afterward. Intel, IBM, and other companies have been developing such workarounds. 

Chang-Hasnain acknowledges that the group has many more things to prove, from electrical pumping of the lasers to proving they provide enough light of the right wavelengths and can couple it to other optical components. But Intel's Paniccia says the demonstration that these laser materials can be made compatible with silicon is "a big step."

By Katherine Bourzac
From Technology Review

Nanonets Give Rust a Boost as Agent in Water Splitting's Hydrogen Harvest

Assistant Professor of Chemistry Dunwei Wang and his clean energy lab pioneered the development of Nanonets in 2008 and have since shown them to be a viable new platform for a number of energy applications by virtue of the increased surface area and improved conductivity of the nano-scale netting made from titanium disilicide, a readily available semiconductor.

 Boston College researchers have tested their Nanonet design as a platform for clean energy applications. Most recently, coating the highly conductive titanium disilicide core (a) with hematite, the mineral form of iron oxide, or rust, dramatically improved the performance of the material at its fundamental state. Transmission electron microscopy image (b) shows the structural complexity of the Nanonet and additional images (c) detail the hematite Nanonet spacing, as well as the electron diffraction pattern of hematite (lower right corner).

Wang and his team report that coating the Nanonets with hematite, the plentiful mineral form of iron oxide, showed the mineral could absorb light efficiently and without the added expense of enhancing the material with an oxygen evolving catalyst.

The results flow directly from the introduction of the Nanonet platform, Wang said. While constructed of wires 1/400th the size of a human hair, Nanonets are highly conductive and offer significant surface area. They serve dual roles as a structural support and an efficient charge collector, allowing for maximum photon-to-charge conversion, Wang said.

"Recent research has shown that the use of a catalyst can boost the performance of hematite," said Wang. "What we have shown is the potential performance of hematite at its fundamental level, without a catalyst. By using this unique Nanonet structure, we have shed new light on the fundamental performance capabilities of hematite in water splitting."

On its own, hematite faces natural limits in its ability to transport a charge. A photon can be absorbed, but the resulting charge has no place to go. By giving it structure and added conductivity, the charge transport abilities of hematite increase, said Wang. Water splitting, a chemical reaction that separates water into oxygen and hydrogen gas, can be initiated by passing an electric current through water. But that process is expensive, so gains in efficiency and conductivity are required to make large-scale water splitting an economically viable source for clean energy, Wang said.
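To see why efficiency gains matter, it helps to put a number on the energy involved. The calculation below uses standard textbook electrochemistry, not data from the Boston College study: the thermodynamic minimum cell voltage for water splitting and the Faraday constant give the least electrical energy needed per mole of hydrogen.

```python
# Standard values (textbook electrochemistry, not from the study itself).
F = 96485.0   # Faraday constant, coulombs per mole of electrons
E_min = 1.23  # minimum cell voltage for water splitting, volts
n = 2         # electrons transferred per H2 molecule produced

# Minimum electrical energy per mole of hydrogen, in kilojoules.
energy_per_mol_h2_kj = n * F * E_min / 1000.0
print(round(energy_per_mol_h2_kj))  # ~237 kJ/mol
```

Real cells need extra voltage (overpotential) on top of this floor, which is exactly the loss that better light absorbers and charge collectors like the hematite Nanonets aim to shrink.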

"The result highlights the importance of charge transport in semiconductor-based water splitting, particularly for materials whose performance is limited by poor charge diffusion," the researchers report in the journal. "Our design introduces material components to provide a dedicated charge transport pathway, alleviates the reliance on the materials' intrinsic properties, and therefore has the potential to greatly broaden where and how various existing materials can be used in energy-related applications."


Male Squids ‘Kick Ass’ at Touch of Female Pheromone

Just a touch of a female pheromone can send male longfin squids into a frenzied rage, potentially giving wimpy squid males a chance to fight for the ladies.

Whether there exists a human analogue to the pheromone, called Loligo beta-microseminoprotein, is a matter of premature speculation. But the findings do reveal a potentially fascinating subject for further research.

“It’s like Popeye’s spinach. When they touch it, they say ‘let’s go’ and start to kick ass,” said biologist Roger Hanlon of the Marine Biological Laboratory, who reported the findings Feb. 10 in Current Biology. “It’s a beautiful, robust response. It may be a mechanism for smaller males who have trouble being dominant to mate with females.”

Pheromones are compounds produced by animals that trigger behaviors and physiological cycles in their brethren, including aggression, alarm, ovulation and even sex. As their mating season peaks in the spring, female longfin squids secrete Loligo beta-microseminoprotein onto their capsules of eggs.

During a 1997 dive to investigate the longfin squid’s mating behavior, Hanlon and others placed one female egg capsule in a school of about 1,000 squids. All of a sudden, the males — which typically show no signs of aggression — went crazy.

“Males are visually attracted to egg capsules. So one bold male squid wiggled his arms in there and immediately started fighting with other males,” Hanlon said. “Another came down and started fighting, then another, then another.” Within five minutes, the entire school had spawned.

Hanlon and his team went on to search for compounds that could trigger the behavior. They brought wild squids into the lab, then isolated compounds secreted by female genitals and in eggs. Males were exposed to each compound until the researchers found one that drove them nuts: Loligo beta-microseminoprotein.

The pheromone resembles a class of poorly understood compounds secreted in reproductive cells and fluids across the animal kingdom, including mammals. However, Hanlon cautioned against thinking it would have an effect on people.

“It’s easy to take a molecule that turns on aggression in squid out of context, though. The NFL and Army shouldn’t be calling for this stuff. It’s so far removed from that,” he said. “There is no evidence at all that it would cause aggression in any vertebrate animal, let alone humans.”

Neurobiologist Edward Kravitz of Harvard University, who studies aggression in flies, called the finding compelling and is curious to see where it leads.

“People think these may be signaling molecules. It’s possible this is a molecule used throughout evolutionary history for a similar purpose,” Kravitz said.

Many other questions remain about the pheromone’s function in squids.

“We don’t know how it gets into the suckers and the blood stream, what receptors it affects, and how it influences the nervous system,” Hanlon said. “This is really just the beginning, and we hope to inspire other folks to start looking closely at how this class of proteins function.”

By Dave Mosher 

The Smallest Computing Systems Yet

A team led by Charles Lieber, a professor of chemistry at Harvard, and Shamik Das, lead engineer in MITRE's nanosystems group, has designed and built a reprogrammable circuit out of nanowire transistors. Several tiles wired together would make the first scalable nanowire computer, says Lieber. Such a device could run inside microscopic, implantable biosensors, and ultra-low-power environmental or structural sensors, say the researchers. 

 Working wires: A scanning electron microscope image (top) shows a programmable nanowire circuit. This false-colored scanning electron microscope image (bottom) shows a nanowire processor tile superimposed on top of the architecture used to design the circuit.

For more than a decade, nanowires and nanotubes have promised to shrink computing to scales impossible to achieve with traditional semiconductor materials. But there have been doubts about the practicality of nanowires and nanotubes as actual computing systems. "There had been little progress in terms of increasing the complexity of circuits," says Lieber.

One big problem has been reproducing structures made from nanowires and nanotubes reliably. Each structure needs to be virtually identical to ensure that a circuit operates as intended. But now, says Lieber, some of those problems are being solved. His group, in particular, has developed ways to produce identical nanowires in bulk. Because of this, he and colleagues at MITRE have been able to design a nanowire circuit architecture that has the potential to scale up. The details are published in the current issue of Nature.

Traditional chips are made using a so-called top-down approach, in which a design is essentially exposed like a photograph onto a semiconductor wafer and the excess material is etched away. In contrast, a bottom-up approach is used to make the nanowire circuits. This means they can be deposited on various types of surfaces, and can be made more compact. "You want [sensor] systems that are physically small," says James Klemic, nanotechnology laboratory director at MITRE. "Right now, your only option is to use a chip that dwarfs the sensor."

To make the new nanowire circuit, the researchers deposited lines of nanowires, made of a germanium core and a silicon shell, on a substrate and crossed them with lines of metal electrodes to create a grid. The points where the nanowires and electrodes intersect act as transistors that can be turned on and off independently. The researchers made a single tile, with an area of 960 square microns, containing 496 functional transistors. It is designed to wire to other tiles so that the transistors, in aggregate, can act as complex logic gates for processing or memory.
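As a rough illustration of how such a crosspoint grid can behave as logic, here is a toy Python model. The class name, the wired-NOR behavior, and all details below are my own simplification for intuition, not the architecture described in the Nature paper: each nanowire-electrode intersection is a programmable on/off node, and each output column computes a NOR of whichever inputs have been programmed onto it.

```python
# Toy model of a crosspoint tile: each (row, column) intersection is a
# programmable transistor that can be switched into or out of a column's
# logic. Each column output is high only when none of its active inputs
# are high, i.e., the column computes NOR of its selected inputs.

class CrossbarTile:
    def __init__(self, rows, cols):
        self.rows = rows
        self.cols = cols
        # Programmed state: True = this intersection participates in logic.
        self.active = [[False] * cols for _ in range(rows)]

    def program(self, row, col, on=True):
        # Nonvolatile in the real device: state persists without power.
        self.active[row][col] = on

    def evaluate(self, inputs):
        # inputs: one boolean per row; returns one boolean per column.
        return [not any(inputs[r] and self.active[r][c] for r in range(self.rows))
                for c in range(self.cols)]

tile = CrossbarTile(rows=3, cols=2)
tile.program(0, 0); tile.program(1, 0)  # column 0 computes NOR(in0, in1)
tile.program(2, 1)                      # column 1 computes NOT(in2)
print(tile.evaluate([False, False, True]))  # -> [True, False]
```

Wiring several such tiles together, with column outputs feeding the rows of the next tile, is the sense in which simple crosspoint arrays can aggregate into more complex logic.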

The nanowire transistors maintain their state, on or off, regardless of whether the power is on. This gives the circuit an instant-on capability, important for low-power sensors that may need to collect data only sporadically and must conserve power in between.

According to Das, the circuits could also be 10 times more power-efficient than circuits made of traditional materials. One reason is the nanowires' electrical properties, which prevent the current leakage seen in traditional transistors. Another is that the circuit design uses capacitive connections instead of less efficient resistive ones. "We don't burn a lot of power driving resistors," says Das.

"This is a significant milestone on several fronts," says André DeHon, professor of electrical and system engineering at the University of Pennsylvania. Reprogrammable transistors made of nanowires are "the building block I was hoping for," he says.

The researchers' work represents "a leap forward in complexity and function of circuits built from the bottom up," says Zhong Lin Wang, professor of materials science and engineering at Georgia Institute of Technology. It shows that the bottom-up method for manufacturing "can yield nanoprocessors and other integrated systems of the future," he says.

More work needs to be done to make nanowire processors practical for use in electronics systems, Lieber says. His group needs to demonstrate thousands of transistors on a tile, many times the 496 achieved so far, and then scale up to multiple tiles. The researchers are in the process of finding the best way to link a 16-tile system together. Lieber says that, realistically, manufacturing these circuits is still several years down the road.

By Kate Greene
From Technology Review

Ultrathin Material Shows Electronic Promise

Molybdenite, a mineral that's currently used as a lubricant, turns out to have extraordinary electronic properties when deposited in two-dimensional strips. Researchers in Switzerland have now made high-performance transistors out of this form of molybdenite. Used in this way, the mineral could hold promise for more efficient flexible solar cells, electronics, or high-performance digital microprocessors.

 Electric mineral: Molybdenite (bottom) was separated into atom-thick sheets and used to make experimental digital transistors (top).

Like graphene, an atom-thick form of carbon, "two-dimensional" molybdenite has electrical and optical properties that are much better than those found in three-dimensional forms of the material.

Researchers led by Andras Kis at the École Polytechnique Fédérale de Lausanne (EPFL) made molybdenite transistors using techniques from the early days of graphene research. Molybdenite, a relatively inexpensive mineral form of molybdenum disulfide, has a layered structure similar to that of raw graphite. Kis's group crushed crystals of molybdenite between folded pieces of tape, peeling back layer after layer until all that remained were three-atom-thick sheets. They then deposited the molybdenite sheets onto a substrate, added a layer of insulating material, and used standard lithography to add source and drain electrodes and a gate to make a transistor. Other researchers had tried this before but didn't get good performance. Kis says the molybdenite transistors have electrical mobility comparable to that of similar transistors made from graphene nanoribbons.

After Andre Geim and Kostya Novoselov demonstrated the promise of graphene in 2004—a feat that won them the Nobel Prize in 2010—there was a burst of interest in making and testing other two-dimensional materials. But graphene was considered more promising than anything else, and other materials came to be seen as curiosities, says James Hone, professor of mechanical engineering at Columbia University. Hone was part of a group that demonstrated that graphene is the strongest material ever tested. Hone, who is not affiliated with the EPFL researchers, expects their results to generate new interest in other two-dimensional materials, and molybdenite in particular. "This is a very promising result that will make us look at this material more carefully and see how we can squeeze better performance out of it," he says. 

Importantly, molybdenite is a semiconductor, which means it has an energy gap, known as a bandgap, that electrons must jump across between the material's conducting and nonconducting states. This is key for any material used in a digital transistor. Graphene does not have a bandgap, and to give it one, researchers must layer it or cut it into ribbons, which is complex and can degrade graphene's other properties. "You have to work very hard to open up a bandgap in graphene," says James Tour, professor of chemistry and computer science at Rice University.
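The practical meaning of a bandgap can be made concrete with a one-line formula: a gap of E electron-volts sets the longest photon wavelength the material can absorb, lambda = hc/E. The 1.8 eV figure used below is a commonly cited value for very thin molybdenum disulfide and is an assumption on my part, not a number from the article.

```python
# Longest absorbable wavelength from a bandgap: lambda = h*c / E.
# Using hc expressed in eV*nm so the result comes out in nanometers.
H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def cutoff_wavelength_nm(bandgap_ev):
    """Photons with wavelengths longer than this pass through unabsorbed."""
    return H_C_EV_NM / bandgap_ev

print(f"{cutoff_wavelength_nm(1.8):.0f} nm")   # assumed thin-MoS2 gap: red light
print(f"{cutoff_wavelength_nm(1.12):.0f} nm")  # silicon, for comparison: infrared
```

The point is simply that a material with a gap in this range responds to visible light, which is why a bandgap matters for solar cells and LEDs as well as for switching a transistor fully off.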

Graphene was originally seen as a material that could replace silicon in digital logic circuits, the type at the heart of today's microprocessors. But because it's so hard to make it into a semiconductor, it's becoming clear that graphene's promise lies elsewhere, for example in superfast analog circuits, the type used for telecommunications and radar, says Phaedon Avouris, who leads the IBM group developing graphene electronics. Molybdenite's bandgap is particularly promising for solar cells, LEDs, and other electro-optical devices.

But this is not enough for a material to show promise for digital logic, cautions Avouris. Molybdenite's properties must be further demonstrated before people in the electronics industry can get excited about it, he says. Researchers will also have to show, for example, that molybdenite has the properties necessary to significantly amplify electrical signals. "It's too early to say how promising this is," Avouris cautions.

Even before molybdenite's promise for high-performance microprocessors is proven—or isn't—researchers expect to find other uses for it. "You can buy metric tons of this stuff," says Hone. He points to work on making liquid suspensions of molybdenite sheets, which might be practical for making flexible solar cells and other electronics—the manual peeling method Kis used isn't practical for making large volumes of devices. "Typically, flexible electronics use polymers, but molybdenite would be more stable," he predicts.

Kis hopes that his results will encourage chemists to work on the problem of producing molybdenite sheets, as Geim and Novoselov's work encouraged people to work on methods for making large amounts of graphene. "To be promising for industry, you need to have some large-scale synthesis method for making a material," says Avouris. "It's the same problem there was in the beginning with graphene."

Tour says that Kis's results will indeed encourage chemical engineers to jump in and work with molybdenite. He says that the first experiments with graphene did not fully demonstrate its promise—researchers didn't know how to work with it. After years of working with graphene, chemists should be better able to work with molybdenite. "You already have a sense of how to handle it. This will be greatly benefited by the work we've been doing with graphene," Tour says. 

By Katherine Bourzac
From Technology Review

Tuberculosis Drug Dosing Gets a Closer Look

Even though antibiotics have been available for more than 50 years, tuberculosis remains a major killer: In 2009, 1.7 million people, largely from poor countries, died from the disease, according to a recent World Health Organization (WHO) report. Tawanda Gumbo, a physician and researcher at the University of Texas Southwestern Medical School, aims to change that by better tailoring courses of antibiotics to individual patients.

 Treating TB: Personalizing doses of antibiotics against the bacterium that causes tuberculosis (shown here) might shorten treatment times and reduce the emergence of drug-resistant strains of the microbe.

Gumbo has spent the last 10 years studying the effects of common tuberculosis drugs in test tubes and in animals in an effort to find the most effective doses. He has also employed mathematical simulations—a technique borrowed from engineering—to figure out how different variables, such as weight, gender, and genetic variations in both the microbe and the patient, change the optimal dose. "If you give the same dose to 100 children, you get 100 different pictures," says Gumbo. "Given all of this variability, how should I dose different children?"
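The "100 children, 100 different pictures" point can be sketched with a toy Monte Carlo simulation: even a fixed weight-based dose produces widely varying drug exposures once body weight and drug clearance vary across a population. Every distribution and parameter below is an illustrative placeholder, not a fitted pharmacokinetic model.

```python
import random

# Toy population simulation: sample a weight and a relative clearance for
# each simulated child, then compute a crude proxy for drug exposure (AUC).
random.seed(1)  # fixed seed so the sketch is reproducible

def simulated_exposure(dose_mg_per_kg=10.0):
    weight = max(random.gauss(25, 6), 5)        # child's weight in kg (assumed)
    clearance = random.lognormvariate(0, 0.4)   # relative clearance (assumed)
    return dose_mg_per_kg * weight / clearance  # exposure proxy, arbitrary units

exposures = [simulated_exposure() for _ in range(100)]
print(f"min {min(exposures):.0f}  max {max(exposures):.0f}  "
      f"spread x{max(exposures) / min(exposures):.1f}")
```

Even this crude model yields a several-fold spread in exposure at an identical mg/kg dose, which is the kind of variability that motivates tailoring doses to the individual.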

Gumbo is now ready to test his theories with the launch of a clinical trial of TB-infected children in South Africa. He hopes that tailoring doses to the individual will help shorten treatment regimens, which typically last six months, and slow the development of drug-resistant strains of the bacterium. The emergence of multidrug resistant cases—a form of the disease that is difficult and costly to treat because it does not respond to first-line drugs—is at its highest rate ever, according to WHO.

In the clinical trial, Gumbo and his collaborators will examine the genetic variability of the microbe infecting a particular patient by assessing its genome as well as how susceptible it is to particular drugs. Patients will get genotyping tests to determine whether they have specific mutations that influence how well they metabolize specific drugs.

Public health agencies largely blame the development of drug-resistant TB on patients not finishing their antibiotic regimens. But Gumbo thinks chronic underdosing could be a major contributor as well. "If you go to places with good treatment programs, where they administer and monitor drugs [according to guidelines], you still see high failure rates," says Gumbo. "If they are doing everything they are supposed to do, why is there still drug resistance?"

According to his research, "the doses needed have almost always been much higher than what we've been giving currently," says Gumbo. "And we know from what we have done in the lab that if you achieve these higher concentrations, you can probably treat for shorter periods of time." 

The idea is still somewhat controversial, but "he is slowly winning converts in the TB community about the importance of proper dose selection," says Paul Ambrose, director of the Institute for Clinical Pharmacodynamics at the Ordway Research Institute, and a former collaborator of Gumbo's. 

Ambrose says that similar types of studies have been done for other drugs, but not for those used to treat TB, probably because they are so old. No new treatments have emerged for TB since the 1970s, though some new compounds are now being tested. 

In the past, this type of individualized drug monitoring for TB has been done "only in patients who were failing therapy, and it has been largely limited to resource-rich settings like the U.S. and Europe," says Eric Nuermberger, a physician at Johns Hopkins. Nuermberger is not involved in the study.

While pharmacogenomics—tailoring drug doses and drug selection to an individual's genetic makeup—is only part of Gumbo's strategy, experts say it will likely prove to be an important aspect of TB treatment. "We know that for isoniazid [a common TB drug], the global population as a whole splits into three different phenotypes: those who metabolize the drug quickly, those who metabolize it slowly, and intermediate metabolizers," says Nuermberger. A dose that is appropriate for a slow metabolizer might be too low for a fast metabolizer; likewise, a higher dose appropriate for a fast metabolizer might increase side effects for a slow metabolizer.
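A phenotype-aware dosing rule of the kind Nuermberger describes might be sketched as follows. The reference dose and the clearance multipliers are hypothetical placeholders chosen for illustration, not clinical recommendations or values from the article.

```python
# Sketch of phenotype-guided dosing: scale a weight-based reference dose
# by an assumed multiplier per metabolizer phenotype. All numbers are
# illustrative placeholders, not clinical values.
REFERENCE_DOSE_MG_PER_KG = 5.0  # hypothetical weight-based reference dose

CLEARANCE_FACTOR = {
    "slow": 0.7,          # drug lingers longer -> lower dose suffices
    "intermediate": 1.0,  # reference case
    "fast": 1.4,          # drug cleared quickly -> higher dose needed
}

def adjusted_dose_mg(weight_kg, phenotype):
    """Return a whole-milligram dose adjusted for metabolizer phenotype."""
    return round(weight_kg * REFERENCE_DOSE_MG_PER_KG * CLEARANCE_FACTOR[phenotype])

for phenotype in ("slow", "intermediate", "fast"):
    print(phenotype, adjusted_dose_mg(60, phenotype), "mg")
```

The same patient weight yields three different doses, which is the essence of the argument: a one-size-fits-all regimen underdoses fast metabolizers and risks side effects in slow ones.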

"The other cornerstone first-line drug for TB, rifampicin, has been notorious for having highly variable drug exposures," adds Nuermberger. "If you look at 100 people who take it, you'll see a tenfold difference in exposure," the concentration and length of time that drug is in a person's bloodstream before being metabolized, he says. "We are now starting to understand there are genetic differences that determine drug exposure. And there is hope that we can develop genetic tests to use in the field." 

While it may be difficult to imagine implementing genetic testing in a poor country with an already strained medical system, Nuermberger doesn't rule it out. "We will continually develop cheaper and easier tests that could be implemented at the point of care," he says. "We thought it would be difficult to implement drug-resistance tests, but those are now being used in reference labs, and a new device is moving even closer to the point of care."

By Emily Singer
From Technology Review

The Key to Better Solar Cells: Bumpy Mirrors

Dye-sensitized thin-film solar cells are cheaper to make than conventional silicon cells, but they're still relatively inefficient. 

Nanodomes: An array of quartz domes 600 nanometers wide and 200 nanometers high (top) is pressed into a thin titanium dioxide film to imprint holes in the film (bottom). Filling the holes with silver helps to trap more light inside dye-based solar cells. 

Now researchers at Stanford University have used a specially designed metal reflector to boost the efficiency of solid-electrolyte dye-sensitized solar cells by as much as 20 percent. The reflector is a thin silver film with an array of nanoscale bumps. The researchers use the film to coat the cells' back surface; the film helps trap more light inside the cells. "We get about 5 to 20 percent more absorption depending on the dye," says Michael McGehee, director of the Center for Advanced Molecular Photovoltaics at Stanford. McGehee led the research, which was published online this week in the journal Advanced Energy Materials.

Dye-sensitized thin-film cells with a light-to-electricity conversion efficiency of around 11 percent recently made their commercial debut. However, they use liquid electrolytes that are volatile and could leak. Cells with solid electrolytes have only shown efficiencies of about 5 percent.

"They took the best solid-state dye cell they could, and made it better," says David Ginger, a chemistry professor at the University of Washington, of the Stanford researchers. "Even better, they did it using technology and methods that could potentially be used in a production environment."

Dye-based solar cells are composed of semiconductor nanocrystals (typically titanium dioxide, or titania) that are coated with dye molecules and sandwiched—along with an electrolyte—between glass or plastic sheets. The dye absorbs light and creates electrons and positively charged holes. The crystals transfer the electrons to one electrode to produce an electrical current, while the electrolyte carries the holes to the other electrode.

Solid electrolytes are not as efficient as liquid ones, though, and the electrons and holes recombine more easily. To prevent that, the titania layer is kept very thin, typically two micrometers. But the thinner the cell, the more light passes through it without being absorbed. Research efforts to improve the efficiency of these cells have typically focused on developing stronger dyes and new types of nanocrystals. McGehee and his colleagues instead used plasmonic reflectors to improve their cell's efficiency.

Plasmons are the oscillations of electrons at a metal surface when they are excited by light. By controlling the shape of the surface, you can control the type of plasmons created, which in turn influences how light interacts with the material. 

The reflector made at Stanford has bumps that create plasmons, which turn some of the incoming light rays by 90 degrees. So instead of bouncing off the silver and going back out of the cell, more light scatters back and forth inside the cell, giving the dye a longer time to absorb it.
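The benefit of turning light sideways can be approximated with the Beer-Lambert relation: the fraction of light absorbed over a path of length L is 1 - exp(-alpha * L), so extending the path inside the film raises absorption. The absorption coefficient used below is an assumed illustrative value, not a measurement from the paper.

```python
import math

# Beer-Lambert sketch: absorbed fraction = 1 - exp(-alpha * L).
# The coefficient is an illustrative placeholder for a dye-loaded film.
ALPHA_PER_UM = 0.15  # assumed effective absorption coefficient, per micrometer

def absorbed_fraction(path_um):
    """Fraction of light absorbed along a path of length path_um micrometers."""
    return 1 - math.exp(-ALPHA_PER_UM * path_um)

straight = absorbed_fraction(2.0)   # single pass through a 2-um titania layer
reflected = absorbed_fraction(4.0)  # flat mirror adds a second pass
turned = absorbed_fraction(20.0)    # light turned 90 degrees travels laterally
print(f"single pass {straight:.2f}, mirrored {reflected:.2f}, turned {turned:.2f}")
```

A flat mirror only doubles the path, while light redirected to travel laterally through the film can be absorbed almost completely, which is why scattering the light sideways beats simply bouncing it back out.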

The researchers made their devices by coating glass with a transparent conductive electrode on which they deposited a layer of titania nanoparticles. Then they took a quartz piece covered with 600-nanometer-wide domes and pressed it into the titania, effectively embossing it with tiny holes. Finally, they added layers of dye and silver.

"This is the first time that plasmonic structures have been applied to solid-state dye-sensitized solar cells, with a substantial increase in cell efficiency being reported," says Kylie Catchpole, a research fellow at the Australian National University. Catchpole is using light-trapping plasmonics to increase the efficiencies of other types of thin-film solar cells. 

A lot of work still needs to be done before the technology makes it to market, says Martin Green, who works on light-trapping photovoltaics at the University of New South Wales. Green says that dye-sensitized cells have "attracted enormous interest from the academic community, but they have made [little] commercial impact due to low efficiencies and doubtful durability," compared to commercial cells. Liquid-electrolyte cells have made forays into the market, but Green is skeptical about their prospects as well.

McGehee, though, is confident that high-enough efficiencies will be possible. The researchers are now looking at creating reflectors with bumps of different sizes, heights, spacing, and patterns. By tweaking these factors, they should be able to increase the amount of light that the cells absorb. They could also explore different dyes. "There definitely seems to be a clear pathway to taking efficiencies up over 20 percent," he says.

By Prachi Patel
From Technology Review