

Time Travel Through the Brain


Over the 100-year history of modern neuroscience, the way we think about the brain has evolved with the sophistication of the techniques available to study it. Improvements in microscope design and manufacture, together with the development of cell-staining techniques, afforded neuroscientists their first glimpse at the specialized cells that make up the nervous system. Microscopes with more magnifying power enabled them to probe nerve cells in greater detail, revealing distinct compartments. Newer techniques expose the connections between nerve cells, revealing the complex organization of the brain.

Visualizing Neurons
Nineteenth-century histologists created some of the first images of nerve cells by chemically stiffening tissue and then immersing it in silver nitrate, randomly staining a small number of cells to make them visible when they were viewed with powerful new light microscopes. The technique revealed the silhouette of the cell body and its network of extensions, and it enabled the great neuroanatomist Santiago Ramón y Cajal to prove that the nervous system consists of cells. He produced the 1899 drawing at left: it shows finely branched Purkinje cells, large neurons in the cerebellum that play an important role in controlling movement.


Photo Credit: Herederos de Santiago Ramón y Cajal




Fluorescent dyes can now be injected directly into cells to stain the ones a researcher wants to view. This image shows a Purkinje cell in red and a nerve fiber from another cell in green. A single Purkinje cell is connected to hundreds of thousands of these fibers.


Photo Credit: Michael Häusser, University College London


Beaming Electrons
Developed in the 1930s, electron microscopes illuminate tissue samples with beams of electrons rather than light, increasing the maximum resolution so that much smaller structures can be distinguished. The image above, of a part of the brain stem that processes auditory information, shows a cluster of nerve-cell connections, magnified 23,900 times. The small, faint circles are synaptic vesicles, which ferry chemical signals between cells.


Photo Credit: Palay, 1956. Originally published in the Journal of Biophysical and Biochemical Cytology, 2: 193-202



A newer twist on electron microscopy, developed in the 1980s, can reveal the internal structures of nerve cells. Researchers use a detergent to remove the cell membrane. Platinum and carbon are deposited onto the exposed surfaces to reproduce the cell’s interior features as a three-dimensional mold, which is then examined in the microscope. This image shows a hippocampal neuron that has been stripped of its membrane to expose the cytoskeleton, a scaffold that regulates the cell’s growth and movement.


Photo Credit: Bernd Knöll (University of Tübingen), Jürgen Berger, and Heinz Schwarz (Max-Planck-Institute for Developmental Biology)



Glowing Cells
In the mid-1990s, researchers began marking specific cells in lab animals by genetically engineering the organisms to incorporate fluorescent proteins (above) found in marine species. Within 10 years, these proteins had been engineered into the cells in more complex ways, enabling researchers to monitor biochemical reactions and track the movements of cellular proteins in real time.


Photo Credit: Koki Moriyoshi et al., Neuron, February 1996



Scientists can now label nerve cells in a rainbow of colors. This image is of a “Brainbow” mouse, which has been engineered so that different nerve cells glow in dozens of hues; it shows the hippocampus, a brain area that is crucial for memory. This technology, developed in 2007, has revealed the connections between cells in remarkable detail.


Photo Credit: Jean Livet, INSERM




The Third Dimension
Confocal laser microscopy uses focused laser beams to scan tissue. The focused beam rejects the scattered, out-of-focus light that blurs images from conventional microscopes, producing sharper, more detailed images. Light reflected back directly from each point is used to construct a three-dimensional image. This pyramidal neuron from the cortex of a mouse (above) was visualized by scanning the tissue at different depths and superimposing the series of images.


Photo Credit: Tony Pham, Baylor College of Medicine


In the 1990s, scientists developed a way to further reduce the scatter of light, called two-photon microscopy. This approach, which uses infrared light, can probe deeper into live tissue, producing images like the section of mouse cerebellum shown above.


Photo Credit: Alanna Watt and Michael Häusser, UCL



Tracing Fibers
In the 1980s, scientists developed fluorescent dyes to help them examine the long, thin extensions of neurons that carry information between these cells. Injected directly into the brain, the dye is incorporated into the cell membrane and transported along it, revealing the route of the nerve fiber. This image highlights the long-range connections between sensory areas of a mouse’s cerebral cortex and thalamus, often called the brain’s relay station. Fibers from the primary visual cortex are shown in red, while fibers from the primary somatosensory cortex, which processes bodily sensations, are shown in green.


Photo Credit: Maria Carmen Piñon and Zoltán Molnár/University of Oxford


Today, scientists can safely examine these connections in a living human brain using a variation of magnetic resonance imaging (MRI) called diffusion tensor imaging. This technique, developed in the 1990s, infers the location of nerve fibers by tracking water molecules in the brain as they move along them. The image above shows fibers radiating from the thalamus in a human brain.


Photo Credit: Thomas Schultz/University of Chicago


By Moheb Costandi







Logging on with Hardware

Valuable information is increasingly stored remotely, but it's difficult to keep it safe without compromising convenience and accessibility for users. Last week, Uniloc, a company based in Irvine, CA, launched a product called EdgeID that promises to strengthen remote authentication by using consumers' devices as keys.

Companies selling cloud services, and businesses offering remote access to employees, are becoming increasingly concerned about the security of remote access.

"Everything can go into the cloud, [but] the identity of the user connecting to the system has to stay at the edge," says Paul Miller, Uniloc's chief marketing officer. In other words, a user always has to access data by some physical means. Miller believes that making devices an integral part of authentication will help companies define harder boundaries around their networks.

EdgeID is part of a recent crop of authentication products that rely on automatically detecting additional information about users. Users seem to like the idea--a recent survey by the Ponemon Institute, a Michigan-based security research company, found that 70 percent of respondents would be willing to let online merchants use information about their computer hardware as part of the authentication system for an online purchase. And about 75 percent said they would prefer device authentication over passwords. However, some experts still question whether such schemes truly improve upon passwords, and whether they might be too inconvenient to catch on.

To use EdgeID, users must first register a device, such as a laptop or smartphone, by installing a small software program. The program collects about 100 pieces of information about the device, ranging from basic facts like the hard-disk serial number to details that evolve through wear on the system, such as the locations of bad sectors on the hard drive. These details are then transferred to a central server running EdgeID's server-side software.

When the user logs on via the registered device, the server communicates with the installed EdgeID software, asking it questions about the information that was collected, such as a particular digit in a serial number. The software keeps up a running conversation, making the system answer questions regularly to stay connected. However, because some information about the device will change with additional wear, the server tolerates some amount of error.
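
The challenge-response loop described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the attribute names, the single-character challenges, and the 70 percent match threshold are all assumptions, not details of Uniloc's actual protocol.

```python
# Minimal sketch of device-fingerprint authentication with error
# tolerance. Every attribute name and value here is an invented
# stand-in for the ~100 details EdgeID-style software gathers.

def collect_fingerprint():
    """Stand-in for the attributes the client software collects."""
    return {
        "disk_serial": "WD-5000AAKS-1234",
        "cpu_id": "GenuineIntel-06-1E",
        "bad_sectors": "1024,90311,220450",  # drifts as the disk wears
        "mac_address": "00:1a:2b:3c:4d:5e",
    }

def answer_challenge(fingerprint, attribute, index):
    """The server asks for one character of one attribute at a time,
    so the full fingerprint never crosses the wire in one piece."""
    value = str(fingerprint[attribute])
    return value[index] if index < len(value) else None

def verify(stored, live, tolerance=0.7):
    """Accept the device if enough attributes still match; the slack
    absorbs details that legitimately change with wear."""
    matches = sum(1 for k in stored if stored[k] == live.get(k))
    return matches / len(stored) >= tolerance
```

In this sketch, a device whose bad-sector map has grown still passes (three of four attributes match), while an unregistered machine fails outright.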

Uniloc leaves it up to a company running EdgeID to determine how to react to unregistered devices. A Web service may decide to lock out such devices completely, limit the actions that can be taken with them, or simply observe the change. Miller notes that users will be able to register new machines or report a machine lost or stolen.

Other products perform device authentication without installing software. For example, Threatmetrix, based in Los Altos, CA, gives customers code that can be embedded into Web pages. When a computer loads a tagged page, the system uses Flash and JavaScript to put together information about the user's browser, operating system, and network characteristics. Threatmetrix can then check for irregularities, such as whether the machine is known to participate in a botnet, whether it's accessing the system via multiple accounts, or whether it's taking steps to obfuscate its location in the network.

But Rick Smith, an information-security consultant and expert on authentication, says it's not clear how much device-authentication schemes add to overall security. The main problem, he says, is that device authentication could be hamstrung by efforts to accommodate traveling users, or others who might legitimately use unregistered devices. "The solutions exist," Smith says. "The problem is, in every case, you have to add more mechanisms to make it work."

A specific problem with third-party systems that seek to identify devices, he adds, is that the underlying operating system and hardware on those devices provide ways for attackers to fool the system. Smith notes that some authentication systems have been designed with cryptographic authentication modules in the operating system, or in hardware. He thinks that this approach would provide stronger security, though it might still pose problems for traveling users.

Others see device authentication as a good supplement to passwords. Larry Ponemon, chairman and founder of the Ponemon Institute, says that he expects device authentication to go through "a natural process of adoption, testing, modification, and refinement," but that it "holds a great deal of promise to address an area of real concern to the consumer."

By Erica Naone

Tiny Devices Use Light to Grab Cells

Tiny optical devices that can grab small particles out of a liquid, using the force of photons, could make it possible to image and identify disease cells on a chip without the need for microscopes. The new types of optical traps, developed by physicists at Harvard University, are designed to be integrated with microfluidic devices, some of which are currently in clinical trials for diagnosing cancer and monitoring patient response to therapies. The Harvard researchers have shown that their optical traps can do on a chip what conventionally requires a large microscope and a powerful laser.

Light force: A silicon chip coated with a gold film (center), when illuminated by laser light shining through a prism, can pull particles out of a liquid solution flowing over the top.



Optical traps, a technology developed in the 1980s, usually cost tens of thousands of dollars and require powerful lasers and microscopes to focus the light onto particles as small as single atoms. Photons have no mass, but they do have momentum, and transferring this momentum to an atom, a molecule, or a cell enables physicists to control the particle's movement, holding it absolutely still for observation, or pulling on it to monitor its response. Since their invention, optical traps have been used to make many basic science advances. But the Harvard group, led by associate professor of electrical engineering Kenneth Crozier, hopes to use optical traps in diagnostic devices, making them cheap and small enough to be practical in medicine.
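
The momentum transfer at the heart of optical trapping is easy to estimate: a photon of energy E carries momentum E/c, so a beam of power P exerts a force of P/c on a particle that fully absorbs it, and twice that if the light is reflected straight back. A quick sketch, with an assumed particle mass for illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def radiation_force(power_watts, reflected=False):
    """Force from momentum transfer: P/c if absorbed, 2P/c if reflected."""
    force = power_watts / C
    return 2.0 * force if reflected else force

# A modest 10 mW beam exerts only ~3.3e-11 N, but on a micron-scale
# particle with a mass of roughly 1e-15 kg (an assumed, typical value)
# that works out to tens of thousands of m/s^2 of acceleration.
force = radiation_force(0.010)
acceleration = force / 1e-15
```

The tiny absolute force is why trapping needs tightly focused light, and the large acceleration on microscopic particles is why it works at all.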

The optical traps developed by Crozier with Harvard researchers Ethan Schonbrun and Kai Wang can trap particles just as strongly as more complex systems. Crozier says that the compact traps could be integrated into microfluidics and used to sort and image disease cells in the blood, for example. Microfluidic chips shuttle cells around in a fluid and typically control their movements using physical barriers and variations in pressure and voltage. Crozier's optical traps could gently pull cells down to the surface of a chip for observation and then be used to sort the cells based on their identity. The group presented their advances at the annual conference of the Optical Society of America this week.

Using manufacturing techniques common to the semiconductor industry, the Harvard researchers patterned chips with two different designs. One is a silicon chip patterned with a ring with a radius of five micrometers. When illuminated by a laser, light resonates around the ring, generating an optical force that can pull particles from liquid flowing above the chip. Another is a chip patterned with arrays of 64 bull's-eye patterns. Each of these can, when illuminated, trap a flowing particle. What's more, these patterns focus light in a way that's very similar to a microscope. "Each has the function of a confocal microscope and could be used to get a 3-D picture of a cell," says Crozier.

"If you want to do cell sorting, silicon optics is a good path," says Tom Perkins, a physicist at the National Institute of Standards and Technology in Boulder, CO. The advantage of silicon systems over conventional optical traps, Perkins says, is compatibility both with microfluidics and with the manufacturing methods already in place for making computer chips.

Focusing power: This chip, mounted with paperclips on a microscope objective for observation, is patterned with gold films 500 nanometers wide. When light shines on the gold lines through a prism underneath the chip, it forms surface energy waves that can trap particles and push them along.



A third design of Crozier's is based on gold structures that can generate a form of light energy called plasmons. When a smooth gold film is illuminated, the light couples to the surface in the form of surface waves called plasmons; the forces generated by these waves are very localized and very strong. Crozier has demonstrated that long, tapered gold films patterned on silicon chips can, when illuminated by light shining through a small prism, be used to pull a particle down and then push it along the gold surface. By changing the angle of the light, it's possible to control a particle's speed. This type of structure will be particularly useful for cell sorting, Crozier says.

These types of systems might eventually replace clinical-laboratory devices called flow cytometers, says Holger Schmidt, professor of electrical engineering and director of the Keck Center for Nanoscale Optofluidics at the University of California, Santa Cruz. Today's flow cytometers use bulky optical systems to separate cells in, say, a blood sample based on their size and shape. Chip-scale optics could do the same thing but would cost much less and might be portable, allowing them to be brought to a patient's bedside. Schmidt, who's developed compact, sensitive optical systems for trapping cell organelles and detecting single virus particles, says these compact optical traps might be on the market in as few as three to five years.

By Katherine Bourzac

Faster Maintenance with Augmented Reality

In the not-too-distant future, it might be possible to slip on a pair of augmented-reality (AR) goggles instead of fumbling with a manual while trying to repair a car engine. Instructions overlaid on the real world would show how to complete a task by identifying, for example, exactly where the ignition coil was, and how to wire it up correctly.
Faster fix: A U.S. Marine technician wears an augmented-reality headset as he carries out a maintenance task inside an armored vehicle.



A new AR system developed at Columbia University starts to do just this, and testing performed by Marine mechanics suggests that it can help users find and begin a maintenance task in almost half the usual time.

AR has long shown potential for both entertainment and practical applications, and the first commercial applications are starting to appear in smartphones, thanks to cheaper, more compact computer chips, cameras, and other sensors. So far, however, these apps have been mainly limited to providing directions. But researchers are also working on many practical applications, including ways to help with specific repair and maintenance tasks.

The Columbia researchers worked with mechanics from the U.S. Marine Corps to measure the benefits of using an AR headset when performing repairs to a light armored vehicle. Currently, Marine mechanics have to refer to a technical manual on a laptop while performing maintenance or repairs inside the vehicle, which has many electric, hydraulic, and mechanical components in a tight space.

A user wears a head-worn display, and the AR system provides assistance by showing 3-D arrows that point to a relevant component, text instructions, floating labels and warnings, and animated, 3-D models of the appropriate tools. An Android-powered G1 smartphone attached to the mechanic's wrist provides touchscreen controls for cueing up the next sequence of instructions.

The idea was to present a user with the "information they need to find and fix problems in a way that is going to be more efficient and accurate," says Steven Feiner, a professor of computer science and director of the Computer Graphics and User Interfaces Laboratory at Columbia, who carried out the research with Steven Henderson, an assistant professor at the United States Military Academy's Department of Systems Engineering. Henderson and Feiner presented their paper at the International Symposium on Mixed and Augmented Reality (ISMAR 09) in Orlando, FL, last Thursday, where it won the conference's Best Paper award.

The work "provides more insights into what AR can contribute in the repair and maintenance domain, and in what specific situations AR interfaces can be helpful and advantageous," says Tobias Höllerer, cochair of ISMAR 09 and associate professor at the University of California, Santa Barbara.

Henderson and Feiner first gathered laser scans and photography of the inside of the vehicle. They built a 3-D model of the vehicle's cockpit and developed software for directing and instructing users in performing individual maintenance tasks. Ten cameras inside the cockpit were used to track the position of three infrared LEDs attached to the user's head-worn display. In the future, the team suggests that it may be more practical for cameras or sensors to be worn by the users themselves.

Six participants carried out 18 tasks using the AR system. For comparison, the same participants also used an untracked headset (showing static text instructions and views without arrows or direction to components) and a stationary computer screen with the same graphics and models used in the headset. The mechanics using the AR system located and started repair tasks 56 percent faster, on average, than when wearing the untracked headset, and 47 percent faster than when using just a stationary computer screen.

"From a research point of view, [this work] is the best comparison yet of the different approaches you can take between a normal multimedia system, a wearable one, and a fully augmented-reality one," says Georgia Institute of Technology professor Blair MacIntyre, who has worked with Feiner in the past but was not involved in this project.

Next, the team wants to expand the AR system so that it tells users how to perform a task better and faster. "We believe that by paying attention to the actual task itself, and giving advice about how to do it, we could get similar types of improvements using AR," says Feiner. "That is something we want very much to explore." In terms of practical AR systems for widespread use, Feiner says that having displays that aren't too cumbersome or bulky will be important.

Even though the Columbia AR system was designed to help trained personnel repair a particular vehicle, similar technology could have a broader impact, says MacIntyre. Such a system could help regular car mechanics and eventually ordinary drivers. "If you're going to build an elaborate system with all of the information about the engine, you can then build a stripped-down version [that] makes building those end-user systems more feasible," he says. MacIntyre adds that a smartphone app showing how to change an engine's oil probably isn't far off but that headset technology will probably take longer to arrive.

By Kristina Grifantini

Next Stop: Ultracapacitor Buses

Municipal transit agencies have tried to reduce the carbon footprint of their bus fleets using a range of options over the years, from biofuels and hydrogen to batteries and hybrid-electric diesel. Now a Chinese company and its U.S. partner say that ultracapacitors could offer the greenest and most economical way of powering inner-city buses.
Fast charger: A bus that runs entirely on ultracapacitors charges up at a bus stop in Shanghai. The buses can only travel three to five miles between charges, but the ultracapacitors allow for fast recharging at designated bus stops.




There's just one catch: the best ultracapacitors can only store about 5 percent of the energy that lithium-ion batteries hold, limiting them to a couple of miles per charge. This makes them ineffective as an energy storage medium for passenger vehicles. But what ultracapacitors lack in range they make up in their ability to rapidly charge and discharge. So in vehicles that have to stop frequently and predictably as part of normal operation, energy storage based exclusively on ultracapacitors begins to make sense.

Sinautec Automobile Technologies, based in Arlington, VA, and its Chinese partner, Shanghai Aowei Technology Development Company, have spent the past three years demonstrating the approach with 17 municipal buses on the outskirts of Shanghai. On October 21, the two companies will offer a one-day demonstration at American University in Washington, DC, where an 11-seat minibus running on ultracapacitors will spend the day shuttling people around campus.

The trick is to turn some bus stops along the route into charge stations, says Dan Ye, executive director of Sinautec. Unlike a conventional trolley bus that has to continually touch an overhead power line, Sinautec's ultracapacitor buses take big sips of electricity every two or three miles at designated charging stations, which double as bus stops. When at these stations, a collector on the top of the bus rises a few feet and touches an overhead charging line. Within a couple of minutes, the ultracapacitor banks stored under the bus seats are fully charged.

"It's a brilliant concept," says ultracapacitor expert Joel Schindall, professor of electrical engineering and computer science at MIT. "It's not well suited for electric-only cars, but it is practical to stop a bus every few city blocks."

The buses can also capture energy from braking, and the company says that recharging stations can be equipped with solar panels (although this is mainly to further the perception that the vehicles have a lower carbon footprint). Ye says the buses use 40 percent less electricity than an electric trolley bus, mainly because they're lighter and benefit from regenerative braking. They're also cost-competitive with conventional buses, given fuel savings over the vehicle's 12-year life at current oil and electricity prices. Sinautec estimates that one of its buses has one-tenth the energy cost of a diesel bus and can achieve lifetime fuel savings of $200,000.
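
Those two cost figures imply a rough consistency check, using only the numbers quoted above: if the ultracapacitor bus has one-tenth the energy cost of a diesel bus yet saves $200,000 over its life, the diesel bus's implied lifetime fuel bill is about $222,000.

```python
# Back-of-the-envelope check on the quoted savings: with one-tenth the
# energy cost, savings = 0.9 * diesel_cost, so a $200,000 saving
# implies a lifetime diesel fuel bill of roughly $222,000.
lifetime_savings = 200_000.0
cost_ratio = 0.10  # ultracapacitor energy cost relative to diesel

implied_diesel_cost = lifetime_savings / (1.0 - cost_ratio)   # ~222,222
implied_ultracap_cost = implied_diesel_cost * cost_ratio      # ~22,222
```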

"The ultracapacitor bus is also cheaper than lithium-ion battery buses," says Ye. "We used the Olympics (lithium-ion) bus as a model and found ours about 40 percent less expensive with a far superior reliability rating." Ye adds that the environmental benefits are compelling. "Even if you use the dirtiest coal plant on the planet, it generates a third of the carbon dioxide of diesel when used to charge an ultracapacitor."

Buses in the Shanghai pilot are made by Germantown, TN-based Foton America Bus Co, which uses ultracapacitors manufactured by Shanghai Aowei. The ultracapacitors are made of activated carbon and have an energy density of six watt-hours per kilogram. (For comparison, a high-performance lithium-ion battery can achieve 200 watt-hours per kilogram.) Clifford Clare, chief executive of Foton America, says another 60 buses will be delivered early next year with ultracapacitors that supply 10 watt-hours per kilogram.
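
Those energy densities put the range figures in context. At six watt-hours per kilogram, the current ultracapacitors store about 3 percent of what a 200-watt-hour-per-kilogram lithium-ion battery holds, and if range scales roughly linearly with energy density (a simplifying assumption, holding vehicle weight and pack size fixed), the coming 10-watt-hour-per-kilogram packs would stretch a three-to-five-mile range to roughly five to eight miles:

```python
ULTRACAP_WH_PER_KG = 6.0   # current packs, per the article
LI_ION_WH_PER_KG = 200.0   # high-performance lithium-ion, per the article

density_ratio = ULTRACAP_WH_PER_KG / LI_ION_WH_PER_KG  # 0.03, i.e. 3 percent

def projected_range(base_range_miles, base_density, new_density):
    """Assume range scales linearly with energy density, all else equal."""
    return base_range_miles * new_density / base_density

low = projected_range(3.0, ULTRACAP_WH_PER_KG, 10.0)   # 5.0 miles
high = projected_range(5.0, ULTRACAP_WH_PER_KG, 10.0)  # ~8.3 miles
```

By the same linear scaling, the 20-mile third-generation target implies packs several times denser still, which is consistent with the nanotube work described below.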

"The ones in Shanghai right now have been on the road for three years without incident, without failure whatsoever, which in the bus industry is phenomenal," says Clare, who adds that his company is in talks with New York City, Chicago, and some towns in Florida about trialing the buses. "It will end up being a third generation of the product, which will give 20 miles [of range per charge] or better."

Sinautec is also in discussions with MIT's Schindall about developing ultracapacitors of higher energy density using vertically aligned carbon nanotube structures that give the devices more surface area for holding a charge.

"So far we're able to get twice the energy density of an existing ultracapacitor, but that's not enough," says Schindall. "We're trying to get about five times." Schindall says that this would create an ultracapacitor with one-quarter of the energy density of a lithium-ion battery.

"Right now the [Foton] buses can only go every other stop, a range of about 5 or 10 city blocks, and that's okay for some routes, but here in the Boston area that would be too far [between charging spots]," Schindall adds. "If they could double that, or even quadruple that, it would increase by an order of magnitude the numbers of routes for which it could be a technical solution."

There are some other important limitations. The 41-passenger buses, based on current technology, lose 35 percent of their range when air conditioning is turned on, and have weak acceleration. But even under these conditions, they could still prove practical for municipal, campus, airport, and tourist buses.

"We want to replace a large portion of the diesel fleet in the United States," says Ye. "We do need to have charging stations throughout various points of the network, but as energy density goes up, the number of stations will go down."

By Tyler Hamilton

Nanopatterns Improve Thin-Film Solar Cells

Any given solar-cell technology has drawbacks and advantages. Thin-film solar cells, for instance, require less material than traditional solar cells, and are therefore cheaper, lighter, and more flexible. And if those thin films are made with amorphous silicon, the cost is further reduced. The problem, however, is that thin-film solar cells made of amorphous silicon tend to have much lower efficiencies than thicker, crystalline silicon photovoltaics.
Rainbow response: The absorption of light with a wavelength of 660 nanometers is greatest for the hole spacing and diameter shown in the dark-red portion of this graphic. The inset shows the layout of the cell.



But now, research from Caltech shows that it's possible to increase the efficiency of thin-film amorphous silicon cells by 37 percent--from 4.5 percent to 6.5 percent--simply by adding a pattern of nanoscale holes to the electrical contact on the back side of the cells. The improved cells are still significantly less efficient than crystalline silicon cells, but importantly, the approach, developed by a team led by Harry Atwater, professor of applied physics and materials science at Caltech, appears to be practical for scaling up to large-scale production of the cells.

A number of researchers and startups are exploring thin-film solar cells made of nonsilicon materials. But, says Atwater, these materials are relatively rare, and as such, they aren't practical for widespread use. "These represent challenges at extremely large scale," he says.

Silicon has the great advantage of being abundant and of having a long history in the manufacturing of electronics. But as a thin solar cell, silicon is less than ideal. There is a mismatch between the distance it takes for photons to be absorbed in silicon and the distance traveled by electrons that produce electrical current. Essentially, electrons knocked out of position by photons tend to spring back to their spots before they can be collected, resulting in low efficiencies in converting sunlight to electricity. But, if the optical absorption can be improved, then more electrons overall can be collected, increasing efficiencies.

Researchers and companies are exploring various options for improving efficiencies in such cells. For example, StarSolar, an MIT spinoff in Cambridge, MA, is exploring photonic crystals, structures that reflect light many times within the solar cell to increase the chance it has to produce electrical current. But so far, the approach appears to be difficult to scale up.

Atwater's approach targets the back side of the solar cell, the metal electrical contact that sits below the layers of "active" silicon material where photons are absorbed. Instead of using a grating to produce multiple internal reflections, he's using an array of holes 225 nanometers in diameter. When light hits metal patterned with holes at this scale, some interesting physics occurs: the energy from the light is trapped in a two-dimensional wave on the surface of the metal. The electrons in these surface waves, called plasmons, are easier to harness for generating an electrical current than those in silicon, which quickly snap back into place.

In previous work, Atwater and others have explored plasmonics to improve efficiency in cells made of gallium arsenide, a semiconductor material commonly used in optics; they have also tried organic materials and even amorphous silicon. However, this previous work applied the plasmonic structures to the front of thin-film amorphous silicon solar cells; in this position they tend to absorb light of certain wavelengths and convert it to heat.

"One of the things that one sees with particles on front side of solar cells is a net loss of short wavelengths due to resonant absorption in the metal itself," says Atwater. "We want to avoid that."

"It was very clever to put the metallic structures in the back contacts," says Mark Brongersma, professor of materials science and engineering at Stanford. "Now, all the incident light gets at least one pass through the cell, and the plasmonic structures can be optimized to manage a few photons with a narrower spectrum." In other words, the size and spacing of the holes can be tweaked to take advantage of the wavelengths of light that make it through the silicon to the back contact.

To make the nanopatterned holes, Atwater's group uses a stamp that is able to imprint holes over the area of an entire silicon wafer. In a series of simple steps, the array of holes is formed in a thin layer of material on the silicon wafer, which is subsequently covered with metal. The active silicon material in the cell and the top electrical contact are then deposited on top of the patterned back side. The stamp can be used for thousands of imprints before it needs to be replaced.

Brongersma, who was not involved in the work, adds that this fabrication technique is certainly amenable to high-volume manufacturing. "The work also makes a big step forward by showing that we may be able to scale plasmonic photovoltaic up to large areas."

The researchers, who will publish the work in an upcoming issue of the journal Applied Physics Letters, have run simulations to determine the optimum hole diameter and spacing. To test the performance of different types of holes, the researchers focused on the efficiency of a single wavelength of light, 660 nanometers. Their simulations indicate that by increasing the diameter and reducing the depth slightly, they can improve absorption at that wavelength from 42 percent to 54 percent, which should further improve the overall performance of the entire photovoltaic cell, although it's unclear by how much.

By Kate Greene

Decoding the Brain with Light

Molecular "light switches" can reveal exactly which neurons are involved in creating a memory, allowing scientists to trigger that memory using only light. The finding, presented at the Society for Neuroscience conference in Chicago this week, is just one example of how a novel technology called optogenetics is allowing scientists to tackle major unanswered questions about the brain, including the role of specific brain regions in the formation of memory, the process of addiction, and the transition from sleep to wakefulness.
Light relief: Scientists use fiber-optic cables to control neural activity in mice, thanks to "light switches" that have been genetically engineered into specific neurons. The technology, called optogenetics, can be used to link specific neural circuits with different behaviors and diseases.




The technology, developed just four years ago by Karl Deisseroth, a physician and bioengineer at Stanford, and Ed Boyden, now a bioengineer at MIT, is already being used by hundreds of labs across the globe. Thanks to molecular tinkering and new fiber-optic devices that deliver light deep into the brain via an implant, researchers can use optogenetics to study the effect of neural stimulation on different behaviors in live animals.

To make neurons sensitive to light, scientists genetically engineer them to carry a protein adapted from green algae. When the modified neuron is exposed to light, via the fiber-optic implant, the protein triggers electrical activity within the cell that spreads to the next neuron in the circuit. The technology allows scientists to control neural activity much more precisely than previous methods, which generally involved delivering electrical current through an electrode.

Michael Hausser's team at University College London is using optogenetics to probe how memories are stored in the brains of mice. According to the basic model of memory formation, learning a new association--such as that a particular sound precedes an electric shock--activates a subset of neurons in part of the brain called the hippocampus. "It's thought that recall of the memory can be triggered by activating only a subset of the cells in that network," says Hausser. "But there's no clear, direct experimental evidence for any of the steps in the process."

Hausser and his collaborators genetically engineered the light-sensitive protein so that it would only be expressed in neurons in the hippocampus that were activated during the formation of a memory. Next, they taught the mice to fear a particular sound by pairing it with an electric shock. Hearing the sound then made the animals freeze in fear--and triggered production of the protein in activated brain cells.

The next day, researchers shone blue light on the animals' hippocampi. That triggered activity in only the subset of cells that fired during memory formation the day before, causing the animal to freeze in fear in response to light, rather than to the sound. The researchers also labeled these cells with a fluorescent marker, allowing them to count the number of cells involved in the creation of the memory. "A remarkably small number of neurons in these animals [are] sufficient to drive recall, on the order of 100 to 200 cells," says Hausser.

In addition to illuminating the most basic aspects of the brain, researchers are using the technology to better understand specific maladies such as depression, Parkinson's, and addiction, in hopes of improving treatments. Parkinson's disease, for example, can be treated using deep brain stimulation, in which a surgically implanted electrode delivers pulses to a specific structure deep in the brain. But the procedure is invasive and carries risk of side effects such as depression and cognitive dysfunction. Earlier this year, Deisseroth's team published details of research using the light switches to study the brain circuits involved in Parkinson's disease. They found that they could alleviate the motor deficits in animals with Parkinson's-like symptoms by activating neural targets much closer to the surface of the brain.

The findings raise the possibility of using noninvasive methods for stimulating the brain to treat Parkinson's patients, which Deisseroth and collaborators are now exploring. Transcranial magnetic stimulation (TMS), a way to activate parts of the brain using a magnet placed over the scalp, has already been approved by the Food and Drug Administration to treat depression. But studies using TMS to treat Parkinson's have yielded mixed results, probably because "people have been poking around different parts of the brain, not being guided by this kind of knowledge," says Deisseroth. In a new study, the researchers will first use sophisticated brain-imaging methods to try to identify in Parkinson's patients the human correlate of the spot identified in animal studies--the exact area will likely vary from person to person--and then target the stimulation specifically to that region.

Scientists are also using optogenetics to study depression, another disease that can be treated with electrical stimulation. They hope to tease out the brain areas responsible for the different symptoms associated with depression, such as fatigue, hopelessness, and lack of pleasure in daily activities.

Researchers mimic clinical depression in mice by subjecting them to several days of extreme social stress. After such stress, these normally social animals refrain from social interaction for the rest of their lives. Like clinical depression in humans, this impairment generates abnormal patterns of neural activity in part of the brain called the prefrontal cortex, and it can be alleviated with antidepressants.

Herbert Covington, a researcher in Eric Nestler's lab at Mount Sinai School of Medicine, in New York, made neurons in the prefrontal cortex of stressed mice sensitive to light. He then stimulated the animals' neurons using light delivered in a pattern similar to that seen in healthy mice exploring a new environment. Much like antidepressants, the light treatment made the previously fearful animals socialize normally with other mice.

"Depression is a complex mix of behaviors," says Covington. "Stimulating the prefrontal cortex can restore a social behavior. Next we will look at whether it can restore activity--will mice choose to do things they find rewarding, which is often a problem in depression." The findings might ultimately enable researchers to develop treatments targeting specific aspects of the disease.

It's not yet clear whether optogenetics technology will become a treatment itself or whether its major impact will be shedding light on disease. Two groups are already focusing on potential treatments: Ed Boyden at MIT has founded a startup to use optogenetics to restore sight to people with vision disorders by making damaged retinal cells sensitive to light, and a startup spun out of Case Western Reserve University in Cleveland, OH, plans to commercialize the technology to restore bladder control in paralyzed people.

By Emily Singer

Nuclear Power Renaissance?

Thirty years ago, in March 1979, a group of badly trained operators in the control room at Three Mile Island's unit 2 confronted a minor malfunction. The problem, a simple pump shutdown, was quickly made worse by an instrument panel that failed to inform the operators about a stuck valve and by an alarm system that overloaded after the first malfunction. The operators botched an attempt to solve the rapidly escalating problem, allowing a small leak to drain most of the cooling water out of the $700 million reactor. In about two hours, they converted America's newest nuclear plant, which had begun commercial operation just three months earlier, into a $1 billion liability.
Signing up: A year after the accident at Three Mile Island, protesters gathered at the site to mark the anniversary and to demand that the nuclear power plant shut down. The industry stopped planning and building new reactors for decades, but interest has lately revived.


The event at the reactor, near Harrisburg, PA, provoked near-panic, and although government reports said the maximum possible radiation exposure was too small to have much effect on human health, one major casualty was the outlook for the nuclear industry itself. The meltdown did not end the first round of nuclear construction in this country; 50 reactors already under construction were completed after the accident, and orders for new plants had effectively ceased anyway. (The last order for a nuclear plant that was actually built came in 1973.) But for years to come, it remained unthinkable to plan new reactors as part of the nation's energy portfolio.

Given pressures to reduce carbon dioxide emissions from fossil-fuel power plants, however, construction of nuclear plants could be poised to begin anew. The technology has grown more reliable and more efficient. Reactors now run 90 percent of the hours in a year, compared with less than 60 percent in 1979, effectively cutting the capital cost of a kilowatt-hour by about a third. Meanwhile, other sources of power have started looking a lot worse. Congress seems likely to put some kind of price tag on carbon dioxide emissions, so the price of coal-produced electricity could rise by 30 to 50 percent. The price of natural gas is low right now but has been more volatile than the price of oil in the past few months as surging supplies and lackluster demand play leapfrog. Such volatility makes electric companies reluctant to rely heavily on gas.
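The capacity-factor arithmetic above can be checked directly: for a fixed capital cost, the cost per kilowatt-hour scales inversely with the fraction of hours the plant actually runs. The figures below are the article's; the function itself is only an illustrative sketch.

```python
# Capital cost per kWh = (annualized capital cost) / (kWh generated per year),
# so for a fixed capital cost it scales as 1 / capacity_factor.

def relative_capital_cost_per_kwh(capacity_factor: float) -> float:
    """Capital cost per kWh, normalized so a plant running 100% of hours costs 1.0."""
    return 1.0 / capacity_factor

cost_1979 = relative_capital_cost_per_kwh(0.60)  # reactors ran <60% of hours in 1979
cost_now = relative_capital_cost_per_kwh(0.90)   # roughly 90% today

reduction = 1 - cost_now / cost_1979  # = 1 - 0.60/0.90
print(f"Capital cost per kWh falls by {reduction:.0%}")  # ~33%, the article's "about a third"
```

Running 90 percent of hours instead of 60 percent spreads the same capital over half again as many kilowatt-hours, which is where the one-third saving comes from.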

All the same, the nuclear industry faces tremendous risks, though their nature has changed since 1979. As the possibility of an accident that panics or injures the neighbors has diminished, the likelihood has grown that even a properly functioning new reactor will be unable to pay for itself. And changes in the utility industry since 1979 mean that this time, the money a company wastes may be its own.

Whether new nuclear plants are a good bet economically depends on three factors, all now in flux. First is the cost of a new reactor. In 2005, a few would-be reactor builders said they could construct a facility generating 1.2 to 1.6 gigawatts for $2,000 per kilowatt of capacity. Now, they put the cost at $4,000 per kilowatt. Neither price includes interest charges accrued during construction, which could be substantial if the job takes more than the five years or so that the builders predict--or if interest rates rise, as they are expected to. The Electric Power Research Institute, a utility consortium based in Palo Alto, CA, recently put the capital cost of a new coal plant at under $3,000 per kilowatt and that of a natural-gas plant at $800 per kilowatt.

The second factor is uncertainty about possible future competitors. If 10 years from now wind or solar plants, or coal plants that capture their carbon emissions, are able to deliver vast amounts of cheap power, the market price of electricity will fall, and plant owners may never see enough revenue to meet their costs.

The third factor is uncertainty about the price of fossil fuels, particularly natural gas. In the last year, the fuel cost for a kilowatt-hour generated from natural gas has varied from about 2.3 cents to about 9 cents. If a federal cap-and-trade system or a tax on carbon dioxide emissions is instituted, that is likely to add 0.5 to 1.5 cents per kilowatt-hour. Add in 2 cents or more to recover the cost of building the plant, and the price of gas-fired power could make nuclear power look very attractive--or really overpriced.
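Stacking the article's three cost components shows how wide that window is. The ranges below are the article's figures; the breakdown into a simple sum is an illustrative simplification.

```python
# Rough per-kWh cost of gas-fired power, in US cents, stacking the article's
# three components: fuel, a possible carbon charge, and plant construction.

def gas_power_cost(fuel: float, carbon: float, capital: float = 2.0) -> float:
    """Total cents per kWh for gas-fired electricity (illustrative sum)."""
    return fuel + carbon + capital

best_case = gas_power_cost(fuel=2.3, carbon=0.5)   # cheap gas, light carbon price
worst_case = gas_power_cost(fuel=9.0, carbon=1.5)  # expensive gas, heavy carbon price

print(f"Gas-fired power: {best_case:.1f} to {worst_case:.1f} cents/kWh")
```

A spread from under 5 cents to over 12 cents per kilowatt-hour is the uncertainty a would-be reactor builder is betting against.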

A power-producing company that bets on natural gas can choose the size of its wager: a 100-megawatt plant, or a 500-megawatt or 1,500-megawatt one. Conventional nuclear plants come in only one size: jumbo. Some power companies have proposed smaller plants, but costs for factors like labor and security are mostly insensitive to size, so these costs per kilowatt-hour rise as the plant shrinks. Costs for engineering and materials are also greater per kilowatt-hour the smaller the plant is.

All these economic risks matter for nuclear power now, because the electricity marketplace has changed dramatically since the industry was deregulated in the 1990s. Before that, each plant's output was paid for by consumers, no matter what the cost. As a result, millions of consumers got stuck paying more than they should have, because their local utilities unwisely chose nuclear instead of coal or natural gas. The financial rules differed from state to state, but generally, once a plant was in service, a company could collect a specified return on its investment, and if a plant projected to cost $1 billion ended up costing $2 billion, the customers paid.

In today's electricity market, however, producers in many states are paid according to market price. Companies build a plant for whatever price they can manage and sell electricity for whatever price they can get. If a reactor produces power at 10 cents per kilowatt-hour and a natural-gas plant produces it at 12 cents, the reactor builder makes a killing. Reverse the numbers and the reactor builder gets killed.

The electricity industry won't build much of anything these days without government help, in the form of loan guarantees, production tax credits, guaranteed markets, or, preferably, all three. Wind now gets bigger production subsidies than nuclear on every kilowatt-hour generated, proportionally more loan guarantees, and a guaranteed market: many states insist on a certain quota of renewable energy, sometimes regardless of cost. In contrast, nuclear power receives production subsidies on only the first 6,000 megawatts of capacity (four or five reactors' output), and its pool of loan guarantees is shrinking relative to the price of construction.

"Right now, the federal incentives are much more conducive to pushing forward renewables," said Jim Miller, the chief executive of the energy company PPL, in June. His company, based in Allentown, PA, would like to build a reactor but will not do so without federal loan guarantees. It will not get them, at least not under the 2005 Energy Policy Act, in which Congress approved only enough to assist a handful of plants: $18.5 billion. "Nothing is currently in place to move the nuclear industry along at the pace people perceived it would move when the 2005 act was passed," Miller says.

The idea of the legislation was that Congress would spoon-feed financial aid to the first half-dozen or so new nuclear plants, and others would follow on their own once new designs were demonstrated and a reformed licensing process was in place. Now, it looks as if those half-dozen new reactors will be the limit of the "renaissance," unless more help is forthcoming. The industry lacks the votes in Congress to expand the loan-guarantee program. Subsidies for wind and solar power are popular, in part because they can be justified as aid to emerging technologies. But many legislators feel that nuclear is less deserving of taxpayer support.

Even now, nuclear power has the potential to be economically attractive if costs and competition are favorable--and if overall demand for power remains strong, with high industrial use and limited improvements in efficiency.

All of that is possible. But the odds are probably not good enough for the nuclear industry to place a bet with its own money. Only the government can agree to back up that bet, and it has yet to do so.

Matthew L. Wald is a reporter at The New York Times. His feature "The Best Nuclear Option" appeared in the July/August 2006 issue of Technology Review.

By Matthew L. Wald

Cell Phones to Go 3-D


A new thin-film technology developed by 3M could enable mobile devices such as cell phones to show 3-D images without the need for special glasses.

Dubbed Vikuiti 3-D, the technology works by guiding slightly different images to the viewer's left and right eyes. Provided that the device is held relatively still, the viewer experiences an "auto-stereoscopic" effect--a sense of depth to the image, says Erik Jostes, business director of 3M's Optical Systems Division in St. Paul, MN.

This optical trick has been around for some time and is essentially the same as the one behind Philips's WOWvx 3-D television displays. However, getting it to work in mobile devices presents new challenges.

In Vikuiti 3-D, prism-shaped reflective structures are embedded on the back of a polymer film, and tiny microlenses are patterned on the front. Together these components steer light through a liquid-crystal display in front of the film. Light passes through the film from two light-emitting diodes, one positioned to the left and one to the right. The light from each LED bounces off a waveguide and strikes the film at a different angle, causing the embedded optics of the film to steer the light in two different directions.

Because each beam of light passes through a liquid-crystal display showing a slightly different image, provided the display is held at the correct distance, each eye receives a slightly different perspective. To trick the viewer's brain into believing it is seeing the two images at the same time, both the LEDs and the LCD panels have to be switched extremely fast--about 120 times a second, says Jostes.
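The rapid switching Jostes describes is time multiplexing: at 120 switches per second, each eye's LED-and-image pair is refreshed 60 times a second. The sketch below illustrates the idea; the alternation schedule is a hypothetical simplification, with only the 120 Hz figure taken from the article.

```python
# Time-multiplexed autostereoscopy: the display alternates left-eye and right-eye
# frames, each paired with its own LED, fast enough that the brain fuses them.

SWITCH_RATE_HZ = 120  # total LED/LCD switches per second, per the article

def frame_schedule(n_frames: int) -> list[str]:
    """Which eye each successive frame is steered to (hypothetical alternation)."""
    return ["left" if i % 2 == 0 else "right" for i in range(n_frames)]

per_eye_rate = SWITCH_RATE_HZ // 2
print(f"Each eye sees its image {per_eye_rate} times per second")

print(frame_schedule(6))  # ['left', 'right', 'left', 'right', 'left', 'right']
```

Each eye thus effectively gets a 60 Hz display, comparable to a conventional 2-D refresh rate, which is why the flicker is not noticeable.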

Mobile devices tend to have both smaller displays and smaller pixels, says David Pepy, general manager of Alioscopy, a company based in Paris, France, that is also developing auto-stereoscopic displays. This means the lens-like structures on the film need to be particularly small, he says.

Not only do the lenses have to be very precisely engineered, Jostes says, but each lens has to be very precisely aligned with the corresponding prism on the back of the film. To achieve this, 3M uses a process called microreplication, a proprietary printing technique that can produce structures tens of micrometers thick in a film just 75 micrometers thick, Jostes says.

With the movie and games industries already working on 3-D content for cinemas and televisions, Jostes believes that the next logical step is the mobile market. The first products featuring the Vikuiti 3-D film have already begun hitting the market in Asia, he says.

However, auto-stereoscopic displays have serious drawbacks, says Armin Schwerdtner, chief scientific officer of SeeReal Technologies in Dresden, Germany, which makes a competing kind of 3-D display. The so-called parallax effect, for instance, occurs when a viewer's head moves and her 3-D perspective is destroyed, which can be nausea-inducing. For this reason, says Schwerdtner, most display companies are still focusing on 3-D techniques that use glasses. "We abandoned [the auto-stereoscopic approach] because we found there were human factors that caused problems," he says.

Jostes argues that most people are used to holding their mobile devices relatively still already. Furthermore, he says, the Vikuiti 3-D approach allows for better resolution and brightness. "And what's nice about it is you can switch between 3-D and 2-D," he says. Displaying identical images on the LCD panel would give both eyes the same 2-D perspective.

Pepy of Alioscopy doubts that 3-D mobile displays will ever be more than a gimmick. "If you want to target the mobile market you have to provide a complete system, you need the capability to take pictures and video in 3D and to send them," he says. "And on a mobile display the depth effect will be very small."

Steven Smith, a 3-D display researcher at De Montfort University in Leicester, UK, disagrees. While the depth perception provided by a mobile device may not be very great, "you don't need a lot of depth cues to make it interesting," he says. "I think 3M's timing might be good."

By Duncan Graham-Rowe

Black Hole Conditions, Right Here on Earth

A team of researchers has created conditions analogous to those found outside of a black hole by blasting a plastic pellet with high-energy laser beams. The advance should sharpen insights into the behavior of matter and energy in extreme conditions.
Boom! After being hit with laser beams, a small plastic pellet (sunlike object) emits x-rays, some of which bombard a pellet of silicon (blue and purple).



Astronomers can't observe black holes directly because their immense gravity won't let light escape. Instead, they have focused on what they can see, namely, the surrounding cloud of swirling matter, known as an accretion disk. When crunched and heated by the black hole's gravitational energy, these disks glow in x-ray light. Analyzing the spectra of these x-rays gives researchers clues about the physics of the black hole.

Scientists don't know precisely how much energy is required to produce such x-rays, however. Part of the difficulty is a process called photoionization, in which high-energy x-ray photons strip away electrons from atoms within the accretion disk. That lost energy alters the characteristics of the x-ray spectra, making it more difficult to measure precisely the total amount of energy being emitted.

To get a better handle on how much energy those photoionized atoms consume, researchers at Osaka University in Japan attempted to recreate conditions in the region of an accretion disk that would be nearest a black hole. They zapped a tiny plastic pellet with 12 laser beams fired simultaneously and allowed some of the resulting radiation to blast a pellet of silicon, a common element in accretion disks.

The synchronized laser strikes caused the plastic pellet to implode, creating an extremely hot and dense core of gas, or plasma. That turned the pellet into "a source of [immensely powerful] x-rays similar to those from an accretion disk around a black hole," says physicist and lead author Shinsuke Fujioka. As Fujioka and colleagues report online this week in Nature Physics, the x-rays photoionized the silicon, and that interaction mimicked the emissions observed in accretion disks. By measuring the energy lost from the photoionization, the researchers could measure total energy emitted from the implosion and use it to improve their understanding of the behavior of x-rays emitted by accretion disks.

"Fujioka et al. have shown us a versatile new way to explore the processes at work near black holes," says physicist R. Paul Drake of the University of Michigan, Ann Arbor. He says detailed examinations of the data should identify areas for improvement "in interpreting similar data from astronomical observations."

By Phil Berardelli
ScienceNOW Daily News
19 October 2009

Tracking Devious Phishing Websites

In the world of online fraud, as in real life, the longer miscreants can operate without being caught, the more money they stand to make. And experts have discovered that many phishers--crooks who use fake websites to trick users into giving up valuable personal information--have found a trick that makes it harder for the good guys to block or shut them down.

Gone phishing: Researchers from Indiana University--left to right, Andrew Kalafut, Youngsang Shin, and Minaxi Gupta--are studying a trick used to make phishing sites harder to detect and block.



The trick, dubbed "flux," allows a fake site to change its address on the Internet very quickly, making it hard for defenders to block these sites or warn unsuspecting users. According to research recently published in the journal IEEE Security and Privacy, about 10 percent of phishing sites are using flux to hide themselves.

Flux makes use of the Internet's domain name system, which is responsible for matching a Web address typed into a browser with the server that actually hosts a site. When a user tries to visit a Web page, the domain name system first directs the user to a name server, which maintains an up-to-date list of site addresses. This name server then tells the user's browser where to find the desired site.

Normally, only a small number of machines host copies of a site--just enough to keep it going if something goes wrong. Fraudulent sites, however, are a different story. Phishing sites are often hosted through botnets--thousands of hijacked machines distributed across the globe.

"These machines don't belong to the miscreants, they belong to you and I and our grandmothers," says Minaxi Gupta, an assistant professor of computer science at Indiana University who was involved with the research. Because phishers have access to so many machines, she explains, they can use all of them to move a site around rapidly, throwing defenders off the scent while keeping the website available.

To use flux, a phisher needs to control a domain name, which gives him the right to control its name server. The phisher then sets the name server so that it directs each new visitor to a different set of machines, cycling quickly through the thousands of addresses available within the botnet. Gupta notes that flux is most effective when the phisher shifts the location of the name server as well. If the name server is also moving to different locations on the Internet, it's doubly hard for defenders to pinpoint a central location where the fake website can be shut down. Gupta's group found that 83 percent of phishing sites that used flux this way lasted more than a day before being blocked, compared with a 65 percent survival rate for sites that didn't use flux.

The group also identifies methods for detecting flux and suggests that flux detection should be built into the domain name system itself. Since using the technique likely means a site is fraudulent, the system itself could help protect unsuspecting users from visiting these sites.
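The paper's exact detection criteria aren't spelled out here, but a common heuristic for spotting flux (not necessarily the authors' method) is to repeat lookups for a domain and flag it when the answers keep returning fresh, widely scattered IP addresses, as a botnet cycles through hijacked hosts. A minimal sketch on canned lookup data, with an illustrative threshold:

```python
# Toy fast-flux heuristic: a domain whose successive DNS lookups keep returning
# new IP addresses is suspicious. Thresholds and sample data are illustrative.

def looks_like_flux(lookups: list[list[str]], min_distinct_ips: int = 10) -> bool:
    """lookups: list of IP-address lists, one list per repeated DNS query."""
    distinct = {ip for answer in lookups for ip in answer}
    return len(distinct) >= min_distinct_ips

# A normal site resolves to the same few servers every time...
normal = [["203.0.113.5", "203.0.113.6"]] * 20

# ...while a fluxing site hands out fresh botnet addresses on each query.
flux = [[f"198.51.100.{i}", f"198.51.100.{i + 1}"] for i in range(0, 40, 2)]

print(looks_like_flux(normal))  # False
print(looks_like_flux(flux))    # True
```

A production detector would also weigh record time-to-live values and the geographic or network diversity of the addresses, since legitimate content-delivery networks can also return many IPs.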

Shortening detection time by even a few hours can make a significant difference, says Alper Caglayan, president of Milcord, a company based in Waltham, MA, that collects real-time data about botnets. "If they can operate even a day, they've already made too much money," he adds.

Caglayan notes that there are some legitimate ways to use flux--for example, to deliver multimedia content efficiently--but says that the way a botnet uses flux should look different. For example, a botnet's machines are scattered around the world in a pattern that wouldn't make sense for a legitimate business.

Some experts believe that a multipronged approach is needed to stop phishing sites. Caglayan's company provides a service that helps Internet service providers and other large network administrators find and shut down infected machines within their networks.

Some Web browsers also use blacklists to warn users away from fraudulent sites. But tricks like flux make it almost impossible for those blacklists to stay current enough to be useful. Caglayan expects that, in the future, browsers will need to build in systems that can detect fraud on their own.

Detecting flux will only help people who are using blocking services of some kind, says Manoj Srivastava, chief technical officer of Cyveillance, a security company based in Arlington, VA. "To effectively deal with an attack involving fast flux, it is necessary to take the domain off the Internet, and that requires working with either the registrar or registry of that domain," he says. This can be hard because some domains are located in countries with loose regulations for Internet fraud. Simpler obstacles such as a language barrier can also leave a fraudulent site in operation for a longer period of time.

Gupta says that, as with most Internet crime, flux is a just one component in a larger game of cat and mouse. "You can't win this game," she says. "You just have to continually detect their means and adjust to them."

By Erica Naone

Making Heart Muscle

A functioning strip of heart muscle has been created from mouse embryonic stem cells, thanks to the identification of a new type of cardiac stem cell. The research has not yet been repeated with human cells, but it lays a blueprint for how to generate heart muscle that could be used to repair damage from heart attacks and to test new drugs. The scientists, from Harvard University, are now working on isolating similar cardiac cells from lines of human stem cells.
Patching hearts: Scientists from Harvard have genetically engineered mice to express two different colored markers in specific heart cells (shown here in red and green), allowing them to isolate cardiac stem cells that produce only heart muscle. Researchers used the cells to create heart patches.




Stem-cell therapy for heart disease has so far focused on trying to repair heart-attack damage with injections of patient-derived stem cells from bone marrow, but studies have yielded mixed results. Rather than using undifferentiated cells, "the push now is to try to obtain cardiac myocytes [heart muscle cells] from people and use them as patches that would be placed over damaged tissue in someone who has had a heart attack," says Benoit Bruneau, a researcher at the Gladstone Institute of Cardiovascular Disease, in San Francisco. "They made engineered cardiac tissue from embryonic stem cells. From a bioengineering point of view, that's significant."

Embryonic stem cells, which are capable of forming any type of tissue in the body, can spontaneously form clumps of beating heart cells when grown in a dish. But it has been difficult to isolate large numbers of these cells from the mix of tissue types that can develop from embryonic stem cells. A heart patch would require a huge number of these cells, perhaps billions, says Christine Mummery, a biologist at the Leiden University Medical Center, in the Netherlands.

The Harvard team, led by Kenneth Chien, director of the Massachusetts General Hospital Cardiovascular Research Center, in Boston, has made progress toward this goal, previously developing a method of isolating a master cardiac stem cell from embryonic stem cells and fetal tissue--one capable of producing all the cell types that make up the heart. In the most recent study, published today in the journal Science, Chien's team developed a way to isolate particularly desirable progeny of this master stem cell, cells that produce only ventricular muscle cells, the type damaged in heart attacks. "If you want to create a cardiac patch, you want cells that will behave--that would line up nicely like they do in the heart," says Bruneau.

The scientists genetically engineered mice to express two markers of different colors--one marking the master cardiac stem cell, and the other turning on when the cells start making muscle. They then isolated the 0.5 percent of cells in the developing mouse embryo that expressed both markers. In addition to making only ventricular muscle cells, these cells also have the ability to continue to reproduce, enabling the production of large volumes of cells. "This ability to divide and make muscle is something that normal heart cells do not have," says Chien.

Using a technology previously developed by Chien's collaborator, Kevin "Kit" Parker, a bioengineer at Harvard, researchers then grew the cells on a thin polymer film that had been patterned with molecules typically found outside of cells, such as collagen. "The cells recognize the geometric cues on the film and reorganize themselves to spontaneously form a piece of cardiac tissue," says Parker. The cells can contract, and they express the same genes as those expressed by normal heart muscle. (See a video of the muscle cells contracting.) "We can use them to test new drugs, as well as the safety of different drugs, chemicals, and nanomaterials," says Parker. "We can also graft them onto the heart and restore contractility of that injured region of the heart."

The researchers still have several steps to surmount before they can test how well the patches will repair the heart. They must find ways to isolate human versions of these cells. To make patches that are therapeutically useful, they must create three-dimensional versions of the two-dimensional patches of muscle cells. That will require the addition of blood vessels to feed the muscle. "We are working on additional technology to template in a vascular system within the cardiac tissue," says Parker. "Once we're comfortable with that, we will take it into animals."

Ultimately, scientists would like to generate these heart-muscle-producing cells from induced pluripotent stem cells, a type of adult stem cell that can be made from a patient's skin cell. This would allow physicians to take a skin biopsy from a heart-attack patient, generate a heart patch that is genetically matched to the patient, and implant it over the damaged heart tissue.

By Emily Singer

Head-up Displays Go Holographic

In the last few years, head-up displays (HUDs), which project information onto the driver's view of the road, have started appearing in a few high-end cars. But a more compact kind of projection device, small enough to fit inside a rearview mirror, could see this kind of display more widely deployed.

A head-up display overlays information on a normal view of the road. For example, symbols can be used to show the car's current speed or the distance to the vehicle ahead without the driver having to look away from the road.

Wing vision: Holographic projection creates a head-up display in a vehicle's wing mirror.



The new projection device, developed by Light Blue Optics, based in Cambridge, UK, uses a technique called holographic projection that allows it to be far smaller than current in-car HUD systems. "We can make an HUD so small you can put it into a rearview mirror or wing mirror," says Edward Buckley, Light Blue Optics's head of business development.

Details of Light Blue Optics's prototype were presented today at the Society for Information Display's Vehicles and Photons 2009 symposium, in Dearborn, MI. The prototype projects an image through a two-way wing mirror so that it appears to be about 2.5 meters away, superimposed over the reflected road scene. The picture appears to originate from a point in space in front of the mirror, but it is visible only from a narrow range of viewing angles.

Existing HUDs require relatively large liquid-crystal arrays and optics to generate an image, says Buckley. "In a BMW 5 Series, the size is about five litres," he says. "We can make it about one-tenth of the size. This means you can start to put these virtual image displays where you couldn't previously."

Holographic projection uses constructive and destructive interference of light to make up the picture, allowing the device to be much smaller. "[Size is] the number-one detriment in getting HUDs into vehicles," says Mark Larry, an expert on in-car displays at Ford who co-chaired the symposium.

Holographic projectors use liquid crystal on silicon (LCOS) to modulate beams of red, green, and blue laser light to create a complete image. Holographic projection does not actually involve creating a hologram, but rather uses principles of holography to create a projected image through optical interference. Buckley says the technology could work equally well on a forward-facing display such as a windshield.

The image appears in focus behind the front of the mirror. "You get the optics to define a sort of point in space where the driver can see the image, but outside that, there is nothing," Buckley says.

The big advantage of HUDs is improved vehicle safety, Buckley says. It takes a certain amount of time for the muscles in the eyes to adjust their focus, which has safety implications. "At speeds of 100 kilometers an hour, this can cost you 22 meters in stopping distance," he says.
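As a rough check on Buckley's figure, the implied eye-refocusing time can be worked backwards from the quoted stopping-distance penalty (a back-of-envelope sketch; the refocus time itself is inferred, not stated in the article):

```python
# Distance travelled while the driver's eyes refocus, at highway speed.
speed_kmh = 100
speed_ms = speed_kmh * 1000 / 3600           # convert km/h to m/s (~27.8 m/s)
distance_lost_m = 22                         # figure quoted by Buckley
refocus_time_s = distance_lost_m / speed_ms  # implied refocusing time
print(f"{speed_ms:.1f} m/s; implied refocus time ~ {refocus_time_s:.2f} s")
```

At 100 km/h a car covers roughly 28 meters per second, so the quoted 22 meters corresponds to about eight-tenths of a second spent refocusing, which is consistent with typical accommodation times for the human eye.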

Steven Stringfellow, a lead engineering specialist in HUDs for General Motors in Warren, MI, notes that HUDs are becoming increasingly common. "Once someone drives with one, the universal reply is that they never want another car without one," he says. "The safety benefits become obvious in daily use. More features are being added as higher-resolution displays become available."

It is not the first time holographic projection has been explored for vehicle HUDs, says Sven Krueger, founder of Holoeye, a German company that is also exploring their use. But historically, the use of lasers has created a speckle effect--bright specks appear in the field of vision, caused by the interference of the coherent laser light. "And you have to have enough processing power to generate the holograms in real time," he says.

Light Blue Optics's core technology includes a more efficient hardware chip as well as software for driving the holographic engine.

"We see holographic projection still at an early stage with these hurdles to overcome," says Krueger. "We're not there yet."

Light Blue Optics is in discussions with several manufacturers, but "it takes at least four years to bring a mature research concept to market," says Buckley.

By Duncan Graham-Rowe

Building the World's Most Powerful Laser

This March, researchers at the National Ignition Facility demonstrated a 1.1 megajoule laser designed to ignite nuclear fusion reactions by 2010. But the facility's technology, which is housed at the Lawrence Livermore National Laboratory in California, cannot yet generate enough energy to drive a practical power plant. So, even as physicists look forward to next year's demonstration, they're working on even more powerful lasers that could enable a kind of laser-induced fusion called fast ignition.
Power up: This laser can deliver a 200-joule pulse of light lasting just 100 femtoseconds. The cables at left carry power to the green flash lamps that pump the laser.



This week, at the annual meeting of the Optical Society of America in San Jose, CA, researchers from the University of Texas presented plans to build an exawatt laser that would be three orders of magnitude more powerful than anything that exists today. Today's most powerful lasers operate on the order of a petawatt, or 10 to the power of 15 (one quadrillion) watts; an exawatt is 10 to the power of 18 watts. Exawatt lasers will be able to concentrate that power into areas measuring just micrometers across, creating tremendous intensities.
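To make those intensities concrete, here is a sketch with assumed numbers (the micrometer-scale focal spot is illustrative, not a figure from the article):

```python
# Intensity of an exawatt beam focused onto a micrometer-scale spot.
power_w = 1e18                  # 1 exawatt = 10^18 watts
spot_side_m = 1e-6              # assumed 1-micrometer square focal spot
intensity = power_w / spot_side_m**2
print(f"intensity ~ {intensity:.0e} W/m^2")
```

Dividing 10^18 watts by a spot area of 10^-12 square meters gives an intensity on the order of 10^30 watts per square meter.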

One way to increase the power of a laser is to decrease the duration of the laser pulse. But working with laser pulses on the order of picoseconds or even femtoseconds is difficult because such pulses are made up of a wide bandwidth of light frequencies that damage optical glass, including the phosphate glass often used to amplify laser light, for example at the National Ignition Facility.
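Peak power scales as pulse energy divided by pulse duration, which is why shorter pulses mean higher power. Using the 200-joule, 100-femtosecond pulse from the photo caption above:

```python
# Peak power ~ pulse energy / pulse duration (ignoring pulse shape).
energy_j = 200.0        # joules, from the caption
duration_s = 100e-15    # 100 femtoseconds
peak_power_w = energy_j / duration_s
print(f"peak power ~ {peak_power_w:.0e} W")   # 2e+15 W: 2 petawatts
```

Squeezing the same 200 joules into a pulse a thousand times shorter would, by the same arithmetic, reach the exawatt scale.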

Todd Ditmire, director of the High Intensity Laser Science Group at the University of Texas at Austin, reported at this week's meeting that a new type of glass should be able to handle the intense pulses of light needed to create an exawatt laser. The glass would be doped and used to create devices called amplifiers--when light from a laser shines on the glass amplifier, ions in the glass absorb the light and re-emit it at higher energy. "The glass is just a host--it's a transparent material that holds the ions," says Ditmire.

The advantage of sticking with glass instead of another material is that manufacturers can readily make it into large devices, which increases the power of the resulting beam. In contrast, titanium sapphire can act as an amplifier for high-power lasers, but it's difficult to make in big pieces, says Ditmire. Working with German manufacturer Schott, the Texas group has begun characterizing the properties of their new type of glass, which combines silicate, the material that makes up everyday glass objects, with the metal element tantalum. Ditmire says his group is now working with Schott to create larger pieces of the material that will be assembled to make a prototype laser.

Ditmire expects that the first application of exawatt lasers will be as energy sources for medical particle accelerators. Bombarding tumors with protons causes fewer side effects than x-ray therapy because the protons release most of their energy at a well-defined depth, sparing the surrounding tissue. However, proton therapy hasn't come into wide use because it requires large particle accelerators. Compact exawatt lasers should be powerful enough to accelerate protons for medical therapy.

But the most exciting potential application for exawatt lasers is in fusion power plants that rely on a process called fast ignition. In the early stages, the National Ignition Facility will use petawatt lasers to compress a pellet of fusion fuel, held in a gold capsule, until it heats up to 100 million °C, triggering fusion. Also at the conference this week, researchers from the facility reported that they've completed another step along the way to controlled fusion reactions, describing preliminary tests of their system using a 500,000-joule pulse to implode a fusion fuel pellet.

Fast ignition works differently. Instead of a single pulse, the technique would use lower-power lasers to "compress the fuel without worrying about heating it, and then a short-pulse [exawatt] laser that acts as a spark plug," igniting the fusion reaction, says Ditmire.

"Whether this will work is controversial," Ditmire admits. Aiming such a short pulse might be problematic. In theory, though, the fast-ignition process should take less energy to operate. The most important measure of the performance of a fusion reactor is its gain, or the ratio of the energy produced by the reaction to the energy required to operate the lasers. The Livermore facility's goal is a gain of 15 to 20. "You need a gain of 100 to make a fusion power plant, and calculations show that exawatt lasers could get it," says Ditmire.
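The gain arithmetic can be sketched directly (the output energies below are hypothetical, chosen only to illustrate the stated targets against NIF's 1.1-megajoule laser pulse):

```python
# Fusion gain = energy released by the reaction / laser energy delivered.
def fusion_gain(energy_out_mj: float, laser_energy_mj: float) -> float:
    return energy_out_mj / laser_energy_mj

laser_mj = 1.1                        # NIF's demonstrated 1.1-megajoule pulse
print(fusion_gain(20.0, laser_mj))    # ~18: within the facility's goal of 15-20
print(fusion_gain(110.0, laser_mj))   # ~100: the rough power-plant threshold
```

By this measure, a reactor at NIF's goal of gain 15 to 20 would still fall several-fold short of the gain of 100 that Ditmire cites as the threshold for a practical power plant.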

But the new glass material isn't the only key to building an exawatt laser. Ditmire's group has also had success with new amplification techniques for making very short-duration pulses using the university's Texas Petawatt Laser. According to Ditmire, the trick to making very high power is a technique called chirping, in which different frequencies of light are separated, run through glass amplifiers, and then run through a compressor to put them back together into a single, higher-power pulse. The Texas group's method combines different types of glass amplifiers for this process, allowing for more compression of the light and therefore increasing the power output further. At the meeting, Ditmire reported using this technique to create 100-femtosecond pulses.

Ditmire isn't the only researcher pushing for the development of exawatt lasers. The inventor of chirping, Gérard Mourou of the Ecole Polytechnique in France, is spearheading a European exawatt laser project called ELI, or Extreme Light Infrastructure. The European group plans to use titanium sapphire amplifiers instead of conventional glass.

By Katherine Bourzac

Seamlessly Melding Man and Machine

A novel implant seeded with muscle cells could better integrate prosthetic limbs with the body, allowing amputees greater control over robotic appendages. The construct, developed at the University of Michigan, consists of tiny cups, made from an electrically conductive polymer, that fit on nerve endings and attract the severed nerves. Electrical signals coming from the nerve can then be translated and used to move the limb.

"This looks like it could be an elegant way to control a prosthetic with fine movement," says Rutledge Ellis-Behnke, a scientist at MIT who was not involved in the research. "Rather than having a big dumb piece of plastic strapped to the arm, you could actually have an integrated tool that feels like it's part of the body."

Living interface: Muscle cells (shown here) are grown on a biological scaffold. Severed nerves remaining from the lost limb connect to the muscle cells in the interface, which transmits electrical signals that can be used to control the artificial arm.



Today, movement of most prostheses is effortful and limited. The limbs are controlled by conscious movement of remaining muscle--the wearer might contract a chest muscle to move the arm in a certain direction, for example. Wiring residual nerves directly to artificial limbs would provide a more intuitive way to control them. But efforts to build peripheral nerve interfaces have been hampered in large part by the growth of scar tissue, which limits the utility and durability of implanted devices.

The most successful method for controlling a prosthesis to date is a surgical procedure in which nerves that were previously attached to muscles in a lost arm and hand are transplanted into the chest. When the wearer thinks about moving the hand, chest muscles contract, and those signals are used to control the limb. While a vast improvement over existing methods, this approach still provides a limited level of control--only about five nerves can be transplanted to the chest.

The new interface, developed by plastic surgeon Paul Cederna and colleagues, builds on this concept, using transplanted muscle cells as targets rather than intact muscle. After a limb is severed, the nerves that originally attached to it continue to sprout, searching for a new muscle with which to connect. (This biological process can sometimes create painful tangles of nerve tissue, called neuromas, at the tip of the severed limb.) "The nerve is constantly sending signals downstream to tell the hand what to do, even if the hand isn't there," says Cederna. "We can interpret those signals and use them to run a prosthesis."

The interface consists of a small cuplike structure about one-tenth of a millimeter in diameter that is surgically implanted at the end of the nerve, relaying both motor and sensory signals from the nerve to the prosthesis. Inside the cup is a scaffold of biological tissue seeded with muscle cells--because motor and sensory nerves make connections onto muscle in healthy tissue, the muscle cells provide a natural target for wandering nerve endings. The severed nerve grows into the cup and connects to the cells, transmitting electrical signals from the brain. Because it is coated with an electrically active polymer, the cup acts as a wire to pick up electrical signals and transmit them to a robotic limb. Cederna's team doesn't develop prostheses itself, but he says the signals could be transmitted via existing wireless technology.

So far, scientists have tested the interface in rodents with a severed peripheral nerve, showing that the nerve will grow into the cup and make connections with the muscle cells. "If they can keep the end of the neuron intact in that area, that's a major breakthrough," says Ellis-Behnke. The nerves in rats are about the same size as those that would be targeted in humans. The research was presented today at a conference of the American College of Surgeons in Chicago.

The device can also feed sensation back into sensory nerves, which relay heat, pressure, and other information from the skin to the brain. Like motor nerves, sensory nerves make connections onto the muscle cells in the cup. In rodent tests, scientists capped two nerves in a single animal--one motor and one sensory. While the rat did not have a prosthesis, scientists were able to show that the implant could bridge the severed nerve, transmitting neural messages across it; tickling the rat's foot triggered muscle cell activity in the implant.

Sensory capability is a major missing component of today's prostheses--tactile, pressure, and temperature feedback is vital for picking up a fragile egg or a hot pan. Ultimately, prosthetic limbs could be outfitted with heat or pressure sensors that could transmit that information to muscle cells in the interface and allow this information to be sent to the brain.

The research is still in its early stages, and a number of questions remain to be answered. "We need to find out how long it takes for the connections to become functional, and what the durability and robustness will be," says Joseph Pancrazio, a program director at the National Institute for Neurological Disorders and Stroke, who was not involved in the research. "But it looks very exciting." The research is funded by the Department of Defense.

One of the major issues with neural implants to date has been the stability of the devices, because implanted electrodes often become coated in scar tissue and stop working. So far, for the six months that the scientists have been assessing the interfaces in rats, there have been no signs of scarring. While scientists aren't sure why, it may be that the cup protects the implant from the inflammatory reactions that lead to scarring, or that providing a target for the nerve cells dampens these reactions altogether by recreating a more normal environment for the severed nerve. The researchers are now monitoring the implants on a daily basis to determine their durability over time.

One particularly promising early finding, however, is that the tissue surrounding the interface grows new blood vessels to feed the implanted muscle cells, supplying them the nutrients they need to survive.

It's not yet clear how many of these nerve caps patients would need for adequate control over a sophisticated artificial limb. Someone who has lost their arm at shoulder level, for example, would need enough nerve caps to flex and extend the elbow, wrist, and fingers, as well as those for sensory nerves. "The only limit," says Cederna, "is going to be how high-tech they can make the prosthetics."

By Emily Singer

Mother's Cancer Can Infect Her Fetus

A startling case in Japan has confirmed that pregnant women with cancer can pass the disease to their fetuses. These transmissions, normally blocked by the placenta, are rare, so the work likely won't change how doctors screen or care for pregnant women. But scientists say the case could help illuminate how cancer foils the body's immune system.
Vulnerable: In rare cases, a fetus can contract cancer from its mother.



In early 2007, a 28-year-old Japanese woman gave birth to a girl. Thirty-six days later, the mother was hospitalized with vaginal bleeding, which became uncontrollable. Doctors diagnosed leukemia, and she soon died. The baby developed normally until age 11 months, when a huge tumor appeared in her cheek. A biopsy determined the cancer was not sarcoma--a cancer of certain connective tissues--but a leukemic tumor somehow trapped in the child's cheek.

The doctors alerted cell biologist Mel Greaves of the Institute of Cancer Research in Sutton, Surrey, United Kingdom, who studies transmissible cancers. Scientists had suspected mother-to-fetus cancer in other cases with strong circumstantial evidence (especially with leukemia and melanoma, which both metastasize readily). But no one had done genetic tests to prove the cancer had grown from a single source and wasn't just an unfortunate coincidence.

In their investigation, Greaves and colleagues discovered incipient cancer cells in routine blood samples taken from the child at birth, strongly suggesting that the transmission happened in utero. They also examined a DNA sequence unique in each case of leukemia, the BCR-ABL1 sequence. It was identical in mother and daughter. Finally, tests showed the child's cancer cells were almost all maternal cells, with no genetic material from the father. This indicated that the transmission path was mother to fetus, not the reverse. The team lays out its evidence in a paper published online 12 October in the Proceedings of the National Academy of Sciences.

Greaves and colleagues also determined how cancer survived inside the fetus, whose immune system should have destroyed the mother's cells. They found that the cancer cells were missing a large region from a stretch of the sixth human chromosome known as 6p, which produces surface markers that immune cells latch on to. In short, Greaves says, "the cancer succeeded because it was immunologically invisible."

Knowing the molecular details of how the cells evaded detection will help scientists probe how other cancers slip by our immune system, says Howard Weinstein, a pediatric cancer specialist at Massachusetts General Hospital in Boston.

Despite the findings, mothers shouldn't panic, says Greaves. With only a few dozen cases of mother-fetus cancer transmission reported since the first, in 1866, the risk for pregnant women is minimal, he says. And transferring advanced cancer to infants is not necessarily fatal--the Japanese girl was successfully treated and is still alive. But Greaves says his team's work questions the assumption that the placenta provides a wholly effective barrier between mothers and fetuses. "I'm more inclined to think that maybe cells get by in modest numbers all the time," he says. "You can learn a lot from very odd cases in medicine."

By Sam Kean
ScienceNOW Daily News
13 October 2009