Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch for light on and off at very high speed........


Air Force Researchers are Building Simple Quantum Computers Out of Holograms

In a paper far too daunting for a Monday, researchers at the Air Force Research Lab (AFRL) have described a novel way to build a simple quantum computer. The idea: rather than using a bunch of finicky interferometers in series to measure the inputs and outputs of data encoded in photons, they want to freeze their interferometers in glass using holograms, making their properties more stable. 

 Quantum Computing with Holograms. Warner A. Miller, Grigoriy Kreymerman, Christopher Tison, Paul M. Alsing, Jonathan R. McDonald

Quantum computing requires encoding information into a quantum medium, and light is the most obvious choice. Photons have no mass and therefore interact very little with external influences; things like electrical interference or magnetic fields don’t mess with the quantum state, and photons travel straight through transparent matter (like fiber optic cable or ambient air). But light is also a bit tricky, because photons don’t interact with each other well either. Processing the information carried by a photon at the receiving end can be particularly problematic. To make quantum computing work, researchers generally use interferometers, which make photons interact in a way that is diagnostic of their state. That’s a roundabout way of saying that interferometers act as the read and write devices for photons, enabling quantum computations, with the output of one interferometer feeding the input of the next.
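To illustrate how an interferometer "reads" the phase of a photon, here is a minimal sketch (our own toy model, not the AFRL setup) of a Mach-Zehnder interferometer written as 2x2 matrices: the phase in one arm deterministically sets which output port the photon appears at.

```python
import numpy as np

# A 50/50 beam splitter and a phase shifter, the two building blocks of a
# Mach-Zehnder interferometer, as 2x2 unitary matrices acting on the
# photon's amplitudes in the two arms.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase(phi):
    # Phase delay phi applied to the first arm only.
    return np.array([[np.exp(1j * phi), 0], [0, 1]])

def mz_output_probs(phi):
    """Probability of detecting the photon at each of the two output ports."""
    state_in = np.array([1, 0])  # photon enters through the first port
    state_out = BS @ phase(phi) @ BS @ state_in
    return np.abs(state_out) ** 2

# With zero phase difference, interference routes the photon entirely into
# one port; a pi phase shift routes it entirely into the other.
```

Stacking such stages, with one stage's outputs feeding the next, is exactly the arrangement the hologram approach aims to stabilize.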

But interferometers aren’t easy to work with. They lose their calibration easily, so stringing together a series of interferometers to conduct more complex calculations is difficult. So the AFRL team had an idea: why not freeze the properties of the interferometers in place by translating them into holograms “frozen” in a piece of tempered glass? That way, researchers could stack the holograms to perform simple quantum functions without worrying about them losing their properties. There’s an off-the-shelf commercial product called OptiGrate that is apparently pretty ideal for this kind of holographic freezing.

Of course, there are drawbacks. For one, OptiGrate is write-once, so there’s no reprogramming a quantum setup after the holograms have been frozen in place. The devices also aren’t scalable, at least for the time being; simple computations are all they are capable of.

Even so, there’s a need for reliable quantum computing schemes, even very simple ones, and as yet no real technology has stepped into that space, Technology Review tells us. So while this kind of thing is pretty nascent, it could be the beginning of something bigger and better as technologies (like OptiGrate) mature.

From popsci

Powerful pixels: Mapping the 'Apollo Zone'

 Mosaic of the near side of the moon as taken by the Clementine star trackers. The images were taken on March 15, 1994.

For NASA researchers, pixels are much more than points of light on a screen – they are precious data that help us understand where we came from, where we've been, and where we're going. 

At NASA's Ames Research Center, Moffett Field, Calif., computer scientists have made a giant leap forward in pulling as much information as possible from imperfect static images. With their advances in image-processing algorithms, the legacy data from the Apollo Metric Camera onboard Apollo 15, 16 and 17 can be transformed into an informative and immersive 3D mosaic map of a large and scientifically interesting part of the moon. 

The "Apollo Zone" Digital Image Mosaic (DIM) and Digital Terrain Model (DTM) maps cover about 18 percent of the lunar surface at a resolution of 98 feet (30 meters) per pixel. The maps are the result of three years of work by the Intelligent Robotics Group (IRG) at NASA Ames, and are available to view through the NASA Lunar Mapping and Modeling Portal (LMMP) and Google Moon feature in Google Earth. 

"The main challenge of the Apollo Zone project was that we had very old data – scans, not captured in digital format," said Ara Nefian, a senior scientist with the IRG and Carnegie Mellon University-Silicon Valley. "They were taken with the technology we had over 40 years ago with imprecise camera positions, orientations and exposure time by today’s standards."

The researchers overcame the challenge by developing new computer vision algorithms to automatically generate the 2D and 3D maps. Algorithms are step-by-step procedures that tell a computer exactly how to carry out a task. For example, part of the 2D imaging pipeline aligns many images taken from various positions with various exposure times into one seamless image mosaic. In the mosaic, areas in shadow, which show up as patches of dark or black pixels, are automatically replaced by lighter gray pixels drawn from better-lit images of the same area, creating a more detailed map. 
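The shadow-replacement step can be sketched in a few lines. This is a deliberately simplified illustration, not the IRG code; the real pipeline also models camera geometry and exposure differences between frames.

```python
import numpy as np

def composite_mosaic(aligned_images, shadow_threshold=30):
    """Merge aligned exposures of the same area: wherever one image is in
    shadow (pixels darker than the threshold), substitute the corresponding
    pixel from the best-lit overlapping image.
    Illustrative sketch only; threshold and selection rule are assumptions."""
    stack = np.stack(aligned_images).astype(float)
    # Mask out shadowed pixels so they cannot win the per-pixel selection.
    masked = np.where(stack < shadow_threshold, -np.inf, stack)
    best = masked.max(axis=0)
    # If every image is shadowed at a pixel, fall back to the brightest raw value.
    fallback = stack.max(axis=0)
    return np.where(np.isinf(best), fallback, best)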

 Left: A normal one-camera image of the lunar surface. Right: A composite Apollo Zone image showing the best details from multiple photographs.

"The key innovation that we made was to create a fully automatic image mosaicking and terrain modeling software system for orbital imagery," said Terry Fong, director of IRG. "We have since released this software in several open-source libraries including Ames Stereo Pipeline, Neo-Geography Toolkit and NASA Vision Workbench." 

Lunar imagery of varying coverage and resolution has been released for general use for some time. In 2009, the IRG helped Google develop "Moon in Google Earth", an interactive, 3D atlas of the moon. With "Moon in Google Earth", users can explore a virtual moonscape, including imagery captured by the Apollo, Clementine and Lunar Orbiter missions. 

The Apollo Zone project uses imagery recently scanned at NASA's Johnson Space Center in Houston, Texas, by a team from Arizona State University. The source images themselves are large – 20,000 by 20,000 pixels – and the IRG aligned and processed more than 4,000 of them. To process the maps, they used Ames' Pleiades supercomputer. 

 The color on this map represents the terrain elevation in the Apollo Zone mapped area.

The initial goal of the project was to build large-scale image mosaics and terrain maps to support future lunar exploration. However, the project's progress will have long-lasting technological impacts on many targets of future exploration. "The algorithms are very complex, so they don't yet necessarily apply to things like real time robotics, but they are extremely precise and accurate," said Nefian. "It's a robust technological solution to deal with insufficient data, and qualities like this make it superb for future exploration, such as a reconnaissance or mapping mission to a Near Earth Object." 

Near Earth Objects, or "NEOs," are comets and asteroids that have been pulled by the gravity of nearby planets into orbits in Earth's neighborhood. NEOs are often small and irregular, which makes their paths hard to predict. With these algorithms, even imperfect imagery of a NEO could be transformed into detailed 3D maps to help researchers better understand its shape and how it might travel while in our neighborhood.

In the future, the team plans to expand the use of their algorithms to include imagery taken at angles, rather than just straight down at the surface. A technique called photoclinometry – or "shape from shading" – allows 3D terrain to be reconstructed from a single 2D image by exploiting the fact that surfaces sloping toward the sun appear brighter than areas that slope away from it. The team will also study imagery not just as pictures, but as physical models that account for all the factors affecting how the final image appears.
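A toy version of the shape-from-shading idea can be written down directly, assuming a Lambertian surface and a known sun angle. The 1-D simplification and function names here are our own, not the team's; real photoclinometry must also handle albedo variations, noise, and the 2-D sign ambiguity.

```python
import math

def shape_from_shading_1d(brightness, sun_angle, dx=1.0):
    """Recover a 1-D height profile from per-pixel brightness under a
    Lambertian model: I = cos(sun_angle - slope_angle), where angles are
    measured from the vertical. Slopes facing the sun come out brighter,
    and integrating the slopes gives heights."""
    heights = [0.0]
    for I in brightness:
        # Invert the Lambertian model for the local slope angle.
        slope_angle = sun_angle - math.acos(max(-1.0, min(1.0, I)))
        heights.append(heights[-1] + math.tan(slope_angle) * dx)
    return heights
```

A quick sanity check is a round trip: generate brightness from known slopes, then confirm the recovered height increments match.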

"As NASA continues to build technologies that will enable future robotic and human exploration, our researchers are looking for new and clever ways to get more out of the data we capture," said Victoria Friedensen, Joint Robotic Precursor Activities manager of the Human Exploration Operations Mission Directorate at NASA Headquarters. "This technology is going to have great benefit for us as we take the next steps."

From physorg

Holographic 3-D looks tantalizingly closer in 2012

Scientists at Imec believe, as do other researchers, that holographic images are the answer to the eye strain and headaches that go along with present-day 3-D viewing. 


 At Imec, the work involves creating moving pixels. Researchers are constructing holographic displays by shining lasers on microelectromechanical systems (MEMS) platforms that can move up and down like small, reflective pistons.

“Holographic visualization promises to offer a natural 3-D experience for multiple viewers, without the undesirable side-effects of current 3D stereoscopic visualization (uncomfortable glasses, strained eyes, fatiguing experience),” the company states.

In their nanoscale system, the researchers work with chips made by growing a layer of silicon oxide on a silicon wafer. They etch away square patches of the silicon oxide, producing a checkerboard-like pattern in which etched pixels sit nanometers lower than their neighbors. A reflective aluminum coating tops the chip. When laser light shines on the chip, it bounces off the boundary between adjacent pixels at an angle. The diffracted light interferes constructively and destructively to form a 3-D picture, and by moving the small mirrored platforms up and down many times a second, the chip creates a moving projection. Put another way, the pixels closer to the light interfere with it one way and those farther away in another; the small height differences between them generate the image that the eye sees.
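The height-to-phase relationship doing the work here is simple: reflected light traverses the pixel's height difference twice (down and back up), so the phase difference between neighbors is 2*pi*(2h)/lambda. A sketch, with an assumed laser wavelength:

```python
import math

def reflection_phase_shift(height_diff_nm, wavelength_nm=532.0):
    """Phase difference between light reflected from two neighboring pixels
    whose mirrored surfaces differ in height by height_diff_nm.
    The reflected ray travels the height difference twice (down and back),
    so delta_phi = 2*pi * (2*h) / lambda.
    The 532 nm default wavelength is an assumed example value."""
    return 2 * math.pi * (2 * height_diff_nm) / wavelength_nm

# A pixel lowered by a quarter wavelength puts its reflection exactly half a
# wave (pi radians) out of step with its neighbor: destructive interference.
```

Moving a piston through a quarter wavelength therefore sweeps a pixel from fully constructive to fully destructive interference.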

Imec hopes to construct the first, proof-of-concept moving structures by mid-2012. “Imec's vision is to design the ultimate 3D display: a holographic display with a 60° diffraction angle and a high-definition visual experience,” they state.  As such, Imec will have lots of company elsewhere in the race to iron out complexities of holographic imaging. According to reports throughout 2011, research teams aim to make the technology more of a reality than a wish-list item for consumers. 

The BBC's R&D department has identified the work that broadcasters are doing across Europe, for example, in holographic TV. Engineers are also focused on research into 3-D holoscopy for the Internet and other 3-D applications. 

Researchers at MIT this year said they were closing in on holographic TV by building a system with a refresh rate of 15 frames per second. Also earlier this year, the Defense Advanced Research Projects Agency (DARPA) completed a five-year project called “Urban Photonic Sandtable Display” that creates realtime, color, 360-degree 3-D holographic displays.

From physorg

Japan scientists hope slime holds intelligence key

Amoeboid yellow slime mold has been on Earth for thousands of years, living a distinctly un-hi-tech life, but, say scientists, it could provide the key to designing bio-computers capable of solving complex problems.

 Japanese scientists are studying amoeboid yellow slime mold to better understand the mechanism of human intelligence. Researchers say the cells appear to have a kind of information-processing ability that allows them to "optimise" the route along which the mold grows to reach food while avoiding stresses -- like light -- that may damage them.

Toshiyuki Nakagaki, a professor at Future University Hakodate, says the organism, which he cultivates in petri dishes, "organises" its cells to create the most direct route through a maze to a source of food.

He says the cells appear to have a kind of information-processing ability that allows them to "optimise" the route along which the mold grows to reach food while avoiding stresses -- like light -- that may damage them.

"Humans are not the only living things with information-processing abilities," said Nakagaki in his laboratory in Hakodate on Japan's northernmost island of Hokkaido.

"Simple creatures can solve certain kinds of difficult puzzles," Nakagaki said. "If you want to spotlight the essence of life or intelligence, it's easier to use these simple creatures."

And it doesn't get much simpler than slime mold, an organism that inhabits decaying leaves and logs and eats bacteria.

Physarum polycephalum, or grape-cluster slime, grows large enough to be seen without a microscope and has the appearance of mayonnaise.

Nakagaki's work with this slime has been recognised with "Ig Nobel" awards in 2008 and 2010. An irreverent take on the Nobel prizes, Ig Nobel prizes are given to scientists who can "first make people laugh, and then make them think."

 Toshiyuki Nakagaki, professor of Future University Hakodate in Japan, says simple creatures such as amoebic slime have information-processing abilities and can solve difficult puzzles.

And, say his contemporaries, slime may sound like an odd place to go looking for the key to intelligence, but it is exactly the right place to start.

Atsushi Tero at Kyushu University in western Japan said slime mold studies are not funny but a "quite orthodox" approach to figuring out the mechanism of human intelligence.

He says slime molds can create much more effective networks than even the most advanced technology that currently exists.

"Computers are not so good at analysing the best routes that connect many base points because the volume of calculations becomes too large for them," Tero explained.

"But slime molds, without calculating all the possible options, can flow over areas in an impromptu manner and gradually find the best routes.

"Slime molds that have survived for hundreds of millions of years can flexibly adjust themselves to a change of the environment," he said. "They can even create networks that are resistant to unexpected stimulus."
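Tero has published a mathematical model of this flow-based route-finding, often called the "Physarum solver": the mold is treated as a network of tubes whose conductivity grows with the flow each tube carries and decays otherwise, so flow gradually concentrates onto the shortest route. The sketch below is a minimal reimplementation of that idea on a toy graph, not the team's code.

```python
import numpy as np

def physarum_shortest_path(n, edges, source, sink, steps=200):
    """Tero-style Physarum solver: tube conductivities relax toward the
    magnitude of the flux they carry, so flow concentrates on the shortest
    source-sink route. `edges` is a list of (u, v, length) tuples."""
    D = {e[:2]: 1.0 for e in edges}   # conductivity per tube
    L = {e[:2]: e[2] for e in edges}  # tube length
    for _ in range(steps):
        # Solve Kirchhoff's equations for node pressures with unit inflow
        # at the source; the sink pressure is pinned to zero.
        A = np.zeros((n, n))
        b = np.zeros(n)
        for (u, v), d in D.items():
            g = d / L[(u, v)]
            A[u, u] += g; A[v, v] += g
            A[u, v] -= g; A[v, u] -= g
        b[source] = 1.0
        A[sink, :] = 0.0; A[sink, sink] = 1.0; b[sink] = 0.0
        p = np.linalg.solve(A, b)
        # Tubes adapt: conductivity moves toward the flux magnitude.
        for (u, v) in D:
            Q = D[(u, v)] / L[(u, v)] * (p[u] - p[v])
            D[(u, v)] += 0.5 * (abs(Q) - D[(u, v)])
    return D
```

On a triangle where the two-hop route (length 2) is shorter than the direct edge (length 3), the solver reinforces the two-hop tubes and starves the direct one.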

Research has shown slime molds become inactive when subjected to stress such as temperature or humidity changes. They even appear to "remember" the stresses and protectively become inactive when they might expect to experience them.

Tero and his research team have successfully had slime molds form the pattern of a railway system quite similar to the railroad networks of the Kanto region centered on Tokyo -- which were designed by hard-thinking people.

He hopes these slime mold networks will be used in future designs of new transport systems or electric transmission lines that need to incorporate detours to get around power outages.

Masashi Aono, a researcher at Riken, a natural science research institute based in Saitama, says his project aims to examine the mechanism of the human brain and eventually duplicate it with slime molds.

"I'm convinced that studying the information-processing capabilities of lower organisms may lead to an understanding of the human brain system," Aono said. "That's my motivation and ambition as a researcher."

Aono says that among the applications of so-called "slime mold neuro-computing" is the creation of new algorithms or software for computers modelled after the methods slime molds use when they form networks.

"Ultimately, I'm interested in creating a bio-computer by using actual slime molds, whose information-processing system will be quite close to that of the human brain," Aono said.

"Slime molds do not have a central nervous system, but they can act as if they have intelligence by using the dynamism of their fluxion, which is quite amazing," Aono said. "To me, slime molds are the window on a small universe."

From physorg

'Nanoantennas' Show Promise in Optical Innovations

The researchers at Purdue University used the nanoantennas to abruptly change a property of light called its phase. Light is transmitted as waves analogous to waves of water, which have high and low points. The phase defines these high and low points of light.

 The image in the upper left shows a schematic for an array of gold "plasmonic nanoantennas" able to precisely manipulate light in new ways, a technology that could make possible a range of optical innovations such as more powerful microscopes, telecommunications and computers. At upper right is a scanning electron microscope image of the structures. The figure below shows the experimentally measured refraction angle versus incidence angle for light, demonstrating how the nanoantennas alter the refraction.

"By abruptly changing the phase we can dramatically modify how light propagates, and that opens up the possibility of many potential applications," said Vladimir Shalaev, scientific director of nanophotonics at Purdue's Birck Nanotechnology Center and a distinguished professor of electrical and computer engineering.

Findings are described in a paper to be published online on Dec. 22 in the journal Science.

The new work at Purdue extends findings by researchers led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at the Harvard School of Engineering and Applied Sciences. In that work, described in an October Science paper, Harvard researchers modified Snell's law, a long-held formula used to describe how light reflects and refracts, or bends, while passing from one material into another.

"What they pointed out was revolutionary," Shalaev said.

Until now, Snell's law has implied that when light passes from one material to another there are no abrupt phase changes along the interface between the materials. Harvard researchers, however, conducted experiments showing that the phase of light and the propagation direction can be changed dramatically by using new types of structures called metamaterials, which in this case were based on an array of antennas.

The Purdue researchers took the work a step further, creating arrays of nanoantennas and changing the phase and propagation direction of light over a broad range of near-infrared light. The paper was written by doctoral students Xingjie Ni and Naresh K. Emani, principal research scientist Alexander V. Kildishev, assistant professor Alexandra Boltasseva, and Shalaev.

The wavelength size manipulated by the antennas in the Purdue experiment ranges from 1 to 1.9 microns.

"The near infrared, specifically a wavelength of 1.5 microns, is essential for telecommunications," Shalaev said. "Information is transmitted across optical fibers using this wavelength, which makes this innovation potentially practical for advances in telecommunications."

The Harvard researchers predicted how to modify Snell's law and demonstrated the principle at one wavelength.

"We have extended the Harvard team's applications to the near infrared, which is important, and we also showed that it's not a single frequency effect, it's a very broadband effect," Shalaev said. "Having a broadband effect potentially offers a range of technological applications."

The innovation could bring technologies for steering and shaping laser beams for military and communications applications, nanocircuits for computers that use light to process information, and new types of powerful lenses for microscopes.

Critical to the advance is the ability to alter light so that it exhibits "anomalous" behavior: notably, it bends in ways not possible with conventional materials, radically altering its refraction – the bending of electromagnetic waves, including light, as they pass from one material into another.

Scientists measure this bending of radiation by its "index of refraction." Refraction causes the bent-stick-in-water effect, which occurs when a stick placed in a glass of water appears bent when viewed from the outside. Each material has its own refraction index, which describes how much light will bend in that particular material. All natural materials, such as glass, air and water, have positive refractive indices.

However, the nanoantenna arrays can cause light to bend in a wide range of angles including negative angles of refraction.

"Importantly, such dramatic deviation from the conventional Snell's law governing reflection and refraction occurs when light passes through structures that are actually much thinner than the width of the light's wavelengths, which is not possible using natural materials," Shalaev said. "Also, not only the bending effect, refraction, but also the reflection of light can be dramatically modified by the antenna arrays on the interface, as the experiments showed."
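The generalized Snell's law that the Harvard and Purdue groups work with adds a phase-gradient term contributed by the antenna array to the ordinary law: n_t*sin(theta_t) = n_i*sin(theta_i) + (lambda/2pi)*dPhi/dx. A small sketch with illustrative numbers of our own choosing:

```python
import math

def refraction_angle(theta_i_deg, n_i, n_t, wavelength, dphi_dx):
    """Generalized Snell's law with an interface phase gradient dPhi/dx
    (radians per meter):
        n_t * sin(theta_t) = n_i * sin(theta_i) + (lambda / (2*pi)) * dPhi/dx
    With dphi_dx = 0 this reduces to the ordinary Snell's law; a nonzero
    gradient, as imposed by a nanoantenna array, can produce anomalous,
    even negative, refraction angles."""
    s = (n_i * math.sin(math.radians(theta_i_deg))
         + wavelength / (2 * math.pi) * dphi_dx) / n_t
    if abs(s) > 1:
        return None  # beyond the critical angle: no transmitted beam
    return math.degrees(math.asin(s))
```

At normal incidence between two identical media, a negative phase gradient alone is enough to bend the transmitted beam to a negative angle, which no natural material pairing can do.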

The nanoantennas are V-shaped structures made of gold and formed on top of a silicon layer. They are an example of metamaterials, which typically include so-called plasmonic structures that conduct clouds of electrons called plasmons. The antennas themselves have a width of 40 nanometers, or billionths of a meter, and researchers have demonstrated they are able to transmit light through an ultrathin "plasmonic nanoantenna layer" about 50 times smaller than the wavelength of light it is transmitting.

"This ultrathin layer of plasmonic nanoantennas makes the phase of light change strongly and abruptly, causing light to change its propagation direction, as required by the momentum conservation for light passing through the interface between materials," Shalaev said.

The work has been funded by the U.S. Air Force Office of Scientific Research and the National Science Foundation's Division of Materials Research.

From sciencedaily

Computer Assisted Design (CAD) for RNA: Researchers Develop CAD-Type Tools for Engineering RNA Control Systems

"Because biological systems exhibit functional complexity at multiple scales, a big question has been whether effective design tools can be created to increase the sizes and complexities of the microbial systems we engineer to meet specific needs," says Jay Keasling, director of JBEI and a world authority on synthetic biology and metabolic engineering. "Our work establishes a foundation for developing CAD platforms to engineer complex RNA-based control systems that can process cellular information and program the expression of very large numbers of genes. Perhaps even more importantly, we have provided a framework for studying RNA functions and demonstrated the potential of using biochemical and biophysical modeling to develop rigorous design-driven engineering strategies for biology."

 JBEI researchers have developed CAD-type tools for engineering RNA components that hold enormous potential for microbial-based production of advanced biofuels and other goods now derived from petrochemicals.

Keasling, who also holds appointments with the Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley, is the corresponding author of a paper in the journal Science that describes this work. The paper is titled "Model-driven engineering of RNA devices to quantitatively program gene expression." Other co-authors are James Carothers, Jonathan Goler and Darmawi Juminaga.

Synthetic biology is an emerging scientific field in which novel biological devices, such as molecules, genetic circuits or cells, are designed and constructed, or existing biological systems, such as microbes, are re-designed and engineered. A major goal is to produce valuable chemical products from simple, inexpensive and renewable starting materials in a sustainable manner. As with other engineering disciplines, CAD tools for simulating and designing global functions based upon local component behaviors are essential for constructing complex biological devices and systems. However, until this work, CAD-type models and simulation tools for biology have been very limited.

"Identifying the relevant design parameters and defining the domains over which expected component behaviors are exerted have been key steps in the development of CAD tools for other engineering disciplines," says Carothers, a bioengineer and lead author of the Science paper who is a member of Keasling's research groups with both JBEI and the California Institute for Quantitative Biosciences. "We've applied generalizable engineering strategies for managing functional complexity to develop CAD-type simulation and modeling tools for designing RNA-based genetic control systems. Ultimately we'd like to develop CAD platforms for synthetic biology that rival the tools found in more established engineering disciplines, and we see this work as an important technical and conceptual step in that direction."

Keasling, Carothers and their co-authors focused their design-driven approach on RNA sequences that can fold into complicated three-dimensional shapes, called ribozymes and aptazymes. Like proteins, ribozymes and aptazymes can bind metabolites, catalyze reactions and act to control gene expression in bacteria, yeast and mammalian cells. Using mechanistic models of biochemical function and kinetic biophysical simulations of RNA folding, ribozyme and aptazyme devices with quantitatively predictable functions were assembled from components that were characterized in vitro, in vivo and in silico. The models and design strategy were then verified by constructing 28 genetic expression devices for the Escherichia coli bacterium. When tested, these devices showed excellent agreement -- a 94-percent correlation -- between predicted and measured gene expression levels. 

"We needed to formulate models that would be sophisticated enough to capture the details required for simulating system functions, but simple enough to be framed in terms of measurable and tunable component characteristics or design variables," Carothers says. "We think of design variables as the parts of the system that can be predictably modified, in the same way that a chemical engineer might tune the operation of a chemical plant by turning knobs that control fluid flow through valves. In our case, knob-turns are represented by specific kinetic terms for RNA folding and ribozyme catalysis, and our models are needed to tell us how a combination of these knob-turns will affect overall system function."

JBEI researchers are now using their RNA CAD-type models and simulations as well as the ribozyme and aptazyme devices they constructed to help them engineer metabolic pathways that will increase microbial fuel production. JBEI is one of three DOE Bioenergy Research Centers established by DOE's Office of Science to advance the technology for the commercial production of clean, green and renewable biofuels. A key to JBEI's success will be the engineering of microbes that can digest lignocellulosic biomass and synthesize from the sugars transportation fuels that can replace gasoline, diesel and jet fuels in today's engines.

"In addition to advanced biofuels, we're also looking into engineering microbes to produce chemicals from renewable feedstocks that are difficult to produce cheaply and in high yield using traditional organic chemistry technology," Carothers says.

While the RNA models and simulations developed at JBEI to date fall short of being a full-fledged RNA CAD platform, Keasling, Carothers and their coauthors are moving towards that goal.

"We are also actively trying to make our models and simulations more accessible to researchers who may not want to become RNA control system experts but would nonetheless like to use our approach and RNA devices in their own work," Carothers says.

While the work at JBEI focused on E. coli and the microbial production of advanced biofuels, the authors of the Science paper believe that their concepts could also be used for programming function into mammalian systems and cells.

"We recently initiated a research project to investigate how we can use our approach to engineer RNA-based genetic control systems that will increase the safety and efficacy of regenerative medicine therapies that use cultured stem cells to treat diseases such as diabetes and Parkinson's," Carothers says.

This research was supported in part by grants from the DOE Office of Science through JBEI, and the National Science Foundation through the Synthetic Biology Engineering Research Center (SynBERC).

From sciencedaily

More Powerful Supercomputers? New Device Could Bring Optical Information Processing

The "passive optical diode" is made from two tiny silicon rings measuring 10 microns in diameter, or about one-tenth the width of a human hair. Unlike other optical diodes, it does not require external assistance to transmit signals and can be readily integrated into computer chips.

 This illustration shows a new "all-silicon passive optical diode," a device small enough to fit millions on a computer chip that could lead to faster, more powerful information processing and supercomputers. The device has been developed by Purdue University researchers.

The diode is capable of "nonreciprocal transmission," meaning it transmits signals in only one direction, making it capable of information processing, said Minghao Qi (pronounced Chee), an associate professor of electrical and computer engineering at Purdue University.

"This one-way transmission is the most fundamental part of a logic circuit, so our diodes open the door to optical information processing," said Qi, working with a team also led by Andrew Weiner, Purdue's Scifres Family Distinguished Professor of Electrical and Computer Engineering.

The diodes are described in a paper to be published online Dec. 22 in the journal Science. The paper was written by graduate students Li Fan, Jian Wang, Leo Varghese, Hao Shen and Ben Niu, research associate Yi Xuan, and Weiner and Qi.

Although fiberoptic cables are instrumental in transmitting large quantities of data across oceans and continents, information processing is slowed and the data are susceptible to cyberattack when optical signals must be translated into electronic signals for use in computers, and vice versa.

"This translation requires expensive equipment," Wang said. "What you'd rather be able to do is plug the fiber directly into computers with no translation needed, and then you get a lot of bandwidth and security."

Electronic diodes constitute critical junctions in transistors and help enable integrated circuits to switch on and off and to process information. The new optical diodes are compatible with industry manufacturing processes for complementary metal-oxide-semiconductors, or CMOS, used to produce computer chips, Fan said.

"These diodes are very compact, and they have other attributes that make them attractive as a potential component for future photonic information processing chips," she said.

The new optical diodes could make for faster and more secure information processing by eliminating the need for this translation. The devices, which are nearly ready for commercialization, also could lead to faster, more powerful supercomputers by using them to connect numerous processors together.

"The major factor limiting supercomputers today is the speed and bandwidth of communication between the individual superchips in the system," Varghese said. "Our optical diode may be a component in optical interconnect systems that could eliminate such a bottleneck."

Infrared light from a laser at telecommunication wavelength goes through an optical fiber and is guided by a microstructure called a waveguide. It then passes sequentially through two silicon rings and undergoes "nonlinear interaction" while inside the tiny rings. Depending on which ring the light enters first, it will either pass in the forward direction or be dissipated in the backward direction, making for one-way transmission. The rings can be tuned by heating them using a "microheater," which changes the wavelengths at which they transmit, making it possible to handle a broad frequency range.
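The microheater tuning described above relies on silicon's thermo-optic effect: warming a ring shifts its resonant wavelength. A back-of-the-envelope sketch (the ~0.08 nm/K coefficient is a typical value for silicon microrings, assumed here for illustration; the article gives no numbers):

```python
def ring_resonance_nm(delta_t_kelvin, base_nm=1550.0, nm_per_kelvin=0.08):
    """Resonant wavelength of a heated silicon microring.
    nm_per_kelvin is a typical thermo-optic tuning rate for silicon,
    not a figure from the Purdue paper."""
    return base_nm + nm_per_kelvin * delta_t_kelvin

# Heating a ring by 20 K shifts its resonance by ~1.6 nm, two 100 GHz
# telecom channel spacings (0.8 nm at 1550 nm), which suggests how such
# a device can be retargeted across a broad frequency range.
print(ring_resonance_nm(20.0))
```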

From sciencedaily

Chemists Solve an 84-Year-Old Theory On How Molecules Move Energy After Light Absorption

Conservation of angular momentum is a fundamental property of nature, one that astronomers use to detect the presence of satellites circling distant planets. In 1927, it was proposed that this principle should apply to chemical reactions, but a clear demonstration has never been achieved.

 MSU chemist Jim McCusker and postdoctoral researcher Dong Guo proved an 84-year-old theory.

In the current issue of Science, MSU chemist Jim McCusker demonstrates for the first time that the effect is real and also suggests how scientists could use it to control and predict chemical reaction pathways in general.

"The idea has floated around for decades and has been implicitly invoked in a variety of contexts, but no one had ever come up with a chemical system that could demonstrate whether or not the underlying concept was valid," McCusker said. "Our result not only validates the idea, but it really allows us to start thinking about chemical reactions from an entirely different perspective."

The experiment involved the preparation of two closely related molecules that were specifically designed to undergo a chemical reaction known as fluorescence resonance energy transfer, or FRET. Upon absorption of light, the system is predisposed to transfer that energy from one part of the molecule to another.

McCusker's team changed the identity of one of the atoms in the molecule from chromium to cobalt. This altered the molecule's properties and shut down the reaction. The absence of any detectable energy transfer in the cobalt-containing compound confirmed the hypothesis.

"What we have successfully conducted is a proof-of-principle experiment," McCusker said. "One can easily imagine employing these ideas to other chemical processes, and we're actually exploring some of these avenues in my group right now."

The researchers believe their results could impact a variety of fields including molecular electronics, biology and energy science through the development of new types of chemical reactions.

Dong Guo, a postdoctoral researcher, and Troy Knight, former graduate student and now research scientist at Dow Chemical, were part of McCusker's team. Funding was provided by the National Science Foundation.

From sciencedaily

LHC May Have Revealed First Hints of Higgs

Finally, physicists may have gotten a long-awaited prize with the latest data release from the Large Hadron Collider on Dec. 13, which show a possible signal for the elusive Higgs boson at around 125 gigaelectronvolts (GeV).

Two separate experiments report a small rise in the number of certain particle decay events occurring in a particular energy range. This could be a sign of the Higgs particle, which is a manifestation of the Higgs field required to give subatomic particles their mass.

The ATLAS experiment sees a signal consistent with a 126 GeV Higgs while the CMS collaboration reports an excess of events at 124 GeV. (A hydrogen atom's mass is approximately 1 GeV, so a Higgs at this mass would be roughly as heavy as a cesium atom.) Even if this signal is not from the Higgs, both experiments narrowed down the range in which the Higgs particle could possibly show up, leaving only a small window between approximately 115 and 130 GeV.
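The cesium comparison can be checked with a single unit conversion: one atomic mass unit corresponds to about 0.9315 GeV (a standard physical constant, not a figure from the article):

```python
GEV_PER_AMU = 0.9315     # mass-energy of one atomic mass unit, in GeV

higgs_gev = 125.0        # candidate signal, between CMS (124) and ATLAS (126)
higgs_amu = higgs_gev / GEV_PER_AMU

cesium_133_amu = 132.9   # atomic mass of cesium-133
print(round(higgs_amu, 1))   # ~134 atomic mass units, close to cesium
```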

“It’s getting very exciting. We are stepping into an interesting territory and we are starting to see some bumps there,” said physicist Greg Landsberg from Brown University in Providence, Rhode Island, who is a team member of the CMS group.

Even more exciting, a Higgs in this mass range would likely require new physics beyond the Standard Model, which describes the interactions of all known subatomic particles and forces, in order to be stable. One possible extension, known as supersymmetry, posits the existence of a heavier partner to all known subatomic particles in order to solve certain problems with the Standard Model.

But physicists’ long wait for the Higgs may not quite be over.

As yet, the findings are “not very significant, and at best 50-50 (probably worse) that it is real,” wrote physicist Matt Strassler of Rutgers University, who was not involved with the work, in an e-mail. The observation is not much more than a “vague hint, and it is neither clear nor convincing.”

While both experiments see a similar signal, the observed particle decay events could have occurred by chance, so this isn’t yet a discovery.

Next year, experiments will roughly quadruple the LHC dataset, giving an additional 15 percent boost in terms of the quality and power of the data, said Landsberg.

As mathematician Peter Woit of Columbia University wrote on his blog the day before the announcement, “One thing that can be predicted with certainty is a flood of papers from theorists claiming that their favorite model predicts this particular Higgs mass.”

From wired

Trillion-Frame-Per-Second Video: Researchers Have Created an Imaging System That Makes Light Look Slow

Media Lab postdoc Andreas Velten, one of the system's developers, calls it the "ultimate" in slow motion: "There's nothing in the universe that looks fast to this camera," he says.

The system relies on a recent technology called a streak camera, deployed in a totally unexpected way. The aperture of the streak camera is a narrow slit. Particles of light -- photons -- enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects late-arriving photons more than it does early-arriving ones.

 One of the things that distinguishes the researchers' new system from earlier high-speed imaging systems is that it can capture light 'scattering' below the surfaces of solid objects, such as the tomato depicted here.

The image produced by the camera is thus two-dimensional, but only one of the dimensions -- the one corresponding to the direction of the slit -- is spatial. The other dimension, corresponding to the degree of deflection, is time. The image thus represents the time of arrival of photons passing through a one-dimensional slice of space.

The camera was intended for use in experiments where light passes through or is emitted by a chemical sample. Since chemists are chiefly interested in the wavelengths of light that a sample absorbs, or in how the intensity of the emitted light changes over time, the fact that the camera registers only one spatial dimension is irrelevant.

But it's a serious drawback in a video camera. To produce their super-slow-mo videos, Velten, Media Lab Associate Professor Ramesh Raskar and Moungi Bawendi, the Lester Wolfe Professor of Chemistry, must perform the same experiment -- such as passing a light pulse through a bottle -- over and over, continually repositioning the streak camera to gradually build up a two-dimensional image. Synchronizing the camera and the laser that generates the pulse, so that the timing of every exposure is the same, requires a battery of sophisticated optical equipment and exquisite mechanical control. It takes only a nanosecond -- a billionth of a second -- for light to scatter through a bottle, but it takes about an hour to collect all the data necessary for the final video. For that reason, Raskar calls the new system "the world's slowest fastest camera."

Doing the math

After an hour, the researchers accumulate hundreds of thousands of data sets, each of which plots the one-dimensional positions of photons against their times of arrival. Raskar, Velten and other members of Raskar's Camera Culture group at the Media Lab developed algorithms that can stitch that raw data into a set of sequential two-dimensional images.
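The core of the stitching step can be sketched as a simple rearrangement of the raw data: each repetition of the experiment, with the camera slit at a different scanline position y, yields a 2-D record of intensity versus (x, time); regrouping those records by time bin produces the video frames. (A schematic sketch with made-up array shapes, not the group's actual algorithm, which also involves calibration and reconstruction.)

```python
import numpy as np

def stitch_streak_records(records):
    """records[y] is the streak-camera image for slit position y:
    a 2-D array indexed by (x, t) -- one spatial axis, one time axis.
    Returns frames, where frames[t] is the 2-D image (y, x) at time t."""
    data = np.stack(records)              # shape (Y, X, T)
    return np.transpose(data, (2, 0, 1))  # shape (T, Y, X)

# Toy example: 3 scanlines, 4 pixels along the slit, 5 time bins.
rng = np.random.default_rng(0)
records = [rng.random((4, 5)) for _ in range(3)]
frames = stitch_streak_records(records)
print(frames.shape)   # 5 frames, each a 3x4 image
```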

The streak camera and the laser that generates the light pulses -- both cutting-edge devices with a cumulative price tag of $250,000 -- were provided by Bawendi, a pioneer in research on quantum dots: tiny, light-emitting clusters of semiconductor particles that have potential applications in quantum computing, video-display technology, biological imaging, solar cells and a host of other areas.

The trillion-frame-per-second imaging system, which the researchers have presented both at the Optical Society's Computational Optical Sensing and Imaging conference and at Siggraph, is a spinoff of another Camera Culture project, a camera that can see around corners. That camera works by bouncing light off a reflective surface -- say, the wall opposite a doorway -- and measuring the time it takes different photons to return. But while both systems use ultrashort bursts of laser light and streak cameras, the arrangement of their other optical components and their reconstruction algorithms are tailored to their disparate tasks.

Because the ultrafast-imaging system requires multiple passes to produce its videos, it can't record events that aren't exactly repeatable. Any practical applications will probably involve cases where the way in which light scatters -- or bounces around as it strikes different surfaces -- is itself a source of useful information. Those cases may, however, include analyses of the physical structure of both manufactured materials and biological tissues -- "like ultrasound with light," as Raskar puts it.

As a longtime camera researcher, Raskar also sees a potential application in the development of better camera flashes. "An ultimate dream is, how do you create studio-like lighting from a compact flash? How can I take a portable camera that has a tiny flash and create the illusion that I have all these umbrellas, and sport lights, and so on?" asks Raskar, the NEC Career Development Associate Professor of Media Arts and Sciences. "With our ultrafast imaging, we can actually analyze how the photons are traveling through the world. And then we can recreate a new photo by creating the illusion that the photons started somewhere else."

"It's very interesting work. I am very impressed," says Nils Abramson, a professor of applied holography at Sweden's Royal Institute of Technology. In the late 1970s, Abramson pioneered a technique called light-in-flight holography, which ultimately proved able to capture images of light waves at a rate of 100 billion frames per second.

But as Abramson points out, his technique requires so-called coherent light, meaning that the troughs and crests of the light waves that produce the image have to line up with each other. "If you happen to destroy the coherence when the light is passing through different objects, then it doesn't work," Abramson says. "So I think it's much better if you can use ordinary light, which Ramesh does."

Indeed, Velten says, "As photons bounce around in the scene or inside objects, they lose coherence. Only an incoherent detection method like ours can see those photons." And those photons, Velten says, could let researchers "learn more about the material properties of the objects, about what is under their surface and about the layout of the scene. Because we can see those photons, we could use them to look inside objects -- for example, for medical imaging, or to identify materials."

"I'm surprised that the method I've been using has not been more popular," Abramson adds. "I've felt rather alone. I'm very glad that someone else is doing something similar. Because I think there are many interesting things to find when you can do this sort of study of the light itself."

From sciencedaily

'Matrix'-Style Effortless Learning? Vision Scientists Demonstrate Innovative Learning Method

Experiments conducted at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan, recently demonstrated that researchers could use decoded functional magnetic resonance imaging (fMRI) to induce brain activity patterns in a person's visual cortex that match a previously known target state, and thereby improve performance on visual tasks.

 In the future, a person may be able to watch a computer screen and have his or her brain patterns modified to improve physical or mental performance. Researchers say an innovative learning method that uses decoded functional magnetic resonance imaging could modify brain activities to help people recuperate from an accident or disease, learn a new language or even fly a plane.

Think of a person watching a computer screen and having his or her brain patterns modified to match those of a high-performing athlete or modified to recuperate from an accident or disease. Though the work is preliminary, researchers say such possibilities may exist in the future.

"Adult early visual areas are sufficiently plastic to cause visual perceptual learning," said lead author and BU neuroscientist Takeo Watanabe of the part of the brain analyzed in the study.

Neuroscientists have found that pictures gradually build up inside a person's brain, appearing first as lines, edges, shapes, colors and motion in early visual areas. The brain then fills in greater detail to make a red ball appear as a red ball, for example.

Researchers studied the early visual areas for their ability to cause improvements in visual performance and learning.

"Some previous research confirmed a correlation between improving visual performance and changes in early visual areas, while other researchers found correlations in higher visual and decision areas," said Watanabe, director of BU's Visual Science Laboratory. "However, none of these studies directly addressed the question of whether early visual areas are sufficiently plastic to cause visual perceptual learning." Until now.

Boston University post-doctoral fellow Kazuhisa Shibata designed and implemented a method using decoded fMRI neurofeedback to induce a particular activation pattern in targeted early visual areas that corresponded to a pattern evoked by a specific visual feature in a brain region of interest. The researchers then tested whether repetitions of the activation pattern caused visual performance improvement on that visual feature.
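The neurofeedback loop described above can be caricatured in a few lines: a decoder scores how closely the current activation pattern matches the target pattern, and the subject is shown feedback proportional to that score. (The pattern vectors and the cosine-similarity decoder below are hypothetical stand-ins; the actual study used multivoxel fMRI decoders.)

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def feedback_score(current_pattern, target_pattern):
    """Map decoder output to the size of the feedback signal shown to
    the subject (0 = no match, 1 = perfect match)."""
    sim = cosine_similarity(current_pattern, target_pattern)
    return max(0.0, sim)  # ignore anti-correlated patterns

target = [0.9, 0.1, 0.4]   # hypothetical target activation pattern
trial = [0.8, 0.2, 0.5]    # pattern measured on one trial
print(round(feedback_score(trial, target), 2))  # ≈ 0.98
```

The subject never needs to be told what the target pattern encodes; only the match score is displayed, which is consistent with the study's finding that learning occurred without awareness of the feature being trained.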

The result, say researchers, is a novel learning approach sufficient to cause long-lasting improvement in tasks that require visual performance.

What's more, the approach worked even when test subjects were not aware of what they were learning.

"The most surprising thing in this study is that mere inductions of neural activation patterns corresponding to a specific visual feature led to visual performance improvement on the visual feature, without presenting the feature or subjects' awareness of what was to be learned," said Watanabe, who developed the idea for the research project along with Mitsuo Kawato, director of the ATR lab, and Yuka Sasaki, an assistant in neuroscience at Massachusetts General Hospital.

"We found that subjects were not aware of what was to be learned while behavioral data obtained before and after the neurofeedback training showed that subjects' visual performance improved specifically for the target orientation, which was used in the neurofeedback training," he said.

The finding brings up an inevitable question. Is hypnosis or a type of automated learning a potential outcome of the research?

"In theory, hypnosis or a type of automated learning is a potential outcome," said Kawato. "However, in this study we confirmed the validity of our method only in visual perceptual learning. So we have to test if the method works in other types of learning in the future. At the same time, we have to be careful so that this method is not used in an unethical way."

At present, the decoded neurofeedback method might be used for various types of learning, including memory, motor and rehabilitation.

The National Science Foundation, the National Institutes of Health and the Ministry of Education, Culture, Sports, Science and Technology in Japan supported the research.

From sciencedaily

NASA developing comet harpoon for sample return

This is an artist's concept of a comet harpoon embedded in a comet. The harpoon tip has been rendered semi-transparent so the sample collection chamber inside can be seen.

Scientists at NASA's Goddard Space Flight Center in Greenbelt, Md., are in the early stages of working out the best design for a sample-collecting comet harpoon. In a lab the size of a large closet stands a metal ballista (large crossbow) nearly six feet tall, with a bow made from a pair of truck leaf springs and a bow string made of steel cable 1/2 inch thick. The ballista is positioned to fire vertically downward into a bucket of target material. For safety, it's pointed at the floor, because it could potentially launch test harpoon tips about a mile if it were angled upwards. An electric winch mechanically pulls the bow string back to generate a precise level of force, up to 1,000 pounds, firing projectiles to velocities upwards of 100 feet per second.
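The figures in this paragraph can be sanity-checked with basic mechanics: the work done on the projectile is roughly the average draw force times the draw length, and that energy sets the launch speed via E = ½mv². (The draw length and projectile mass below are hypothetical; the article states only the peak force and the muzzle velocity.)

```python
import math

LBF_TO_N = 4.448   # pounds-force to newtons
FT_TO_M = 0.3048   # feet to metres

force_avg_n = 500 * LBF_TO_N   # assume ~half the 1,000 lb peak, on average
draw_m = 0.5                   # hypothetical draw length, metres
mass_kg = 2.0                  # hypothetical harpoon tip + bolt mass

energy_j = force_avg_n * draw_m               # work done on the projectile
speed_ms = math.sqrt(2 * energy_j / mass_kg)  # from E = 1/2 m v^2

print(round(energy_j), "J")                   # ~1112 J
print(round(speed_ms / FT_TO_M), "ft/s")      # ~109 ft/s, consistent with
                                              # "upwards of 100 feet per second"
```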

Donald Wegel of NASA Goddard, lead engineer on the project, places a test harpoon in the bolt carrier assembly, steps outside the lab and moves a heavy wooden safety door with a thick plexiglass window over the entrance. After dialing in the desired level of force, he flips a switch and, after a few-second delay, the crossbow fires, launching the projectile into a 55-gallon drum full of cometary simulant -- sand, salt, pebbles or a mixture of each. The ballista produces a uniquely impressive thud upon firing, somewhere between a rifle and a cannon blast.

"We had to bolt it to the floor, because the recoil made the whole testbed jump after every shot," said Wegel. "We're not sure what we'll encounter on the comet – the surface could be soft and fluffy, mostly made up of dust, or it could be ice mixed with pebbles, or even solid rock. Most likely, there will be areas with different compositions, so we need to design a harpoon that's capable of penetrating a reasonable range of materials. The immediate goal though, is to correlate how much energy is required to penetrate different depths in different materials. What harpoon tip geometries penetrate specific materials best? How does the harpoon mass and cross section affect penetration? The ballista allows us to safely collect this data and use it to size the cannon that will be used on the actual mission."

 This is a demonstration of the sample collection chamber.

Comets are frozen chunks of ice and dust left over from our solar system's formation. As such, scientists want a closer look at them for clues to the origin of planets and ultimately, ourselves. "One of the most inspiring reasons to go through the trouble and expense of collecting a comet sample is to get a look at the 'primordial ooze' – biomolecules in comets that may have assisted the origin of life," says Wegel.

Scientists at the Goddard Astrobiology Analytical Laboratory have found amino acids in samples of comet Wild 2 from NASA's Stardust mission, and in various carbon-rich meteorites. Amino acids are the building blocks of proteins, the workhorse molecules of life, used in everything from structures like hair to enzymes, the catalysts that speed up or regulate chemical reactions. The research gives support to the theory that a "kit" of ready-made parts created in space and delivered to Earth by meteorite and comet impacts gave a boost to the origin of life.

Although ancient comet impacts could have helped create life, a present-day hit near a populated region would be highly destructive, as a comet's large mass and high velocity would make it explode with many times the force of a typical nuclear bomb. One plan to deal with a comet headed towards Earth is to deflect it with a large – probably nuclear – explosion. However, that might turn out to be a really bad idea. Depending on the comet's composition, such an explosion might just fragment it into many smaller pieces, with most still headed our way. It would be like getting hit with a shotgun blast instead of a rifle bullet. So the second major reason to sample comets is to characterize the impact threat, according to Wegel. We need to understand how they're made so we can come up with the best way to deflect them should any have their sights on us.

"Bringing back a comet sample will also let us analyze it with advanced instruments that won't fit on a spacecraft or haven't been invented yet," adds Dr. Joseph Nuth, a comet expert at NASA Goddard and lead scientist on the project.

This is a photo of the ballista testbed preparing to fire a prototype harpoon into a bucket of material that simulates a comet.
 
Of course, there are other ways to gather a sample, like using a drill. However, any mission to a comet has to overcome the challenge of operating in very low gravity. Comets are small compared to planets, typically just a few miles across, so their gravity is correspondingly weak, maybe a millionth that of Earth, according to Nuth. "A spacecraft wouldn't actually land on a comet; it would have to attach itself somehow, probably with some kind of harpoon. So we figured if you have to use a harpoon anyway, you might as well get it to collect your sample," says Nuth. Right now, the team is working out the best tip design, cross-section, and explosive powder charge for the harpoon, using the crossbow to fire tips at various speeds into different materials like sand, ice, and rock salt. They are also developing a sample collection chamber to fit inside the hollow tip. "It has to remain reliably open as the tip penetrates the comet's surface, but then it has to close tightly and detach from the tip so the sample can be pulled back into the spacecraft," says Wegel. "Finding the best design that will package into a very small cross section and successfully collect a sample from the range of possible materials we may encounter is an enormous challenge."

"You can't do this by crunching numbers in a computer, because nobody has done it before -- the data doesn't exist yet," says Nuth. "We need to get data from experiments like this before we can build a computer model. We're working on answers to the most basic questions, like how much powder charge do you need so your harpoon doesn't bounce off or go all the way through the comet. We want to prove the harpoon can penetrate deep enough, collect a sample, decouple from the tip, and retract the sample collection device."

The spacecraft will probably have multiple sample collection harpoons with a variety of powder charges to handle areas on a comet with different compositions, according to the team. After they have finished their proof-of-concept work, they plan to apply for funding to develop an actual instrument. "Since instrument development is more expensive, we need to show it works first," says Nuth.

Currently, the European Space Agency is sending a mission called Rosetta that will use a harpoon to grapple a probe named Philae to the surface of comet "67P/Churyumov-Gerasimenko" in 2014 so that a suite of instruments can analyze the regolith. "The Rosetta harpoon is an ingenious design, but it does not collect a sample," says Wegel. "We will piggyback on their work and take it a step further to include a sample-collecting cartridge. It's important to understand the complex internal friction encountered by a hollow, core-sampling harpoon."

NASA's recently-funded mission to return a sample from an asteroid, called OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, Security -- Regolith Explorer), will gather surface material using a specialized collector. However, the surface can be altered by the harsh environment of space. "The next step is to return a sample from the subsurface because it contains the most primitive and pristine material," said Wegel.

Both Rosetta and OSIRIS-REx will significantly increase our ability to navigate to, rendezvous with, and locate specific interesting regions on these foreign bodies. The fundamental research on harpoon-based sample retrieval by Wegel and his team is necessary so the technology is available in time for a subsurface sample return mission.


From physorg

Fluorescent Protein Lights Up the Inner Workings of the Brain

Interactions between neurons involve both chemical and electrical signaling. For decades, neuroscientists have searched for a noninvasive way to measure the electrical component. Achieving this could make it easier to study how the brain works, and how neurological disease impairs its functioning.

 Light up: Applying voltage to the neurons shown here caused an increase in fluorescence.

One promising approach is to track neuronal electrical activity with fluorescent markers, which can be integrated into cells fairly easily through genetics or by being attached to antibodies, but which can be toxic and slow to respond. Last week, researchers introduced a new candidate—a fluorescent protein from a Dead Sea microbe—that appears to be better equipped for the challenge.

The protein, called archaerhodopsin-3, or Arch, was discovered more than 10 years ago, but scientists are just now starting to realize its potential as a research tool. In a study published last year, researchers used light to trigger an electrical response from Arch that silenced overactive neurons—an approach that could lead to new therapeutics for epilepsy and other seizure disorders.

In this study, the researchers took the opposite tack and used electricity to elicit changes in Arch's fluorescence. The approach could lead to more accurate methods for recording electrical signals from the brain.

The results, published in Nature Methods, indicate that Arch could be the noninvasive voltage sensor neuroscientists have been looking for: It's not toxic to cells, and it's sensitive and fast enough to pick up the rapid electrical changes that accompany neuronal activity. 

"It looks orders of magnitude better than any of the other optical imaging methods I've seen before," says Darcy Peterka, a neuroscientist at Columbia University who was not involved with the study.

The standard method for recording electrical activity in neurons in cell culture—which involves sticking an electrode into the cell—remains the most accurate for measuring voltage at a single point in the cell. But puncturing a neuron with an electrode eventually kills it, whereas Arch would let researchers follow the electrical signal as it propagates throughout the cell. It would also allow researchers to record from the same cell again and again, allowing for long-term experiments that would not be possible with the standard method.

"It really depends on what scientific questions you're trying to answer," says Adam Cohen, a biophysics researcher at Harvard University and the lead author of the new study.

The study was conducted in cultured mouse neurons, but Cohen and his colleagues plan to use Arch to measure neuronal activity in live animals, starting with simple organisms, such as the zebrafish and the worm C. elegans. One advantage of these animals is that they're transparent, making it easy to see the fluorescent signal through a microscope. 

Arch could also prove useful for imaging electrical signals in the mammalian brain, especially for experiments in mice, which could be genetically engineered to express the protein in specific neurons or at specific times in development, for example. 

The challenge of transferring the approach to animals is making sure the fluorescent signal stays strong and consistent. "In the living brain, light gets absorbed—for example, by blood—so you lose light," says Ed Boyden, the researcher at MIT who led the study that used Arch to silence neurons.

The fluorescence given off by Arch also isn't as bright as some of the other available dyes, but its low toxicity makes this less of a concern, because researchers could compensate by using higher concentrations. "The fact that they got it to work well in mouse neurons bodes well," says Peterka.

By Erica Westly
From Technology Review

Gasoline Fuel Cell Would Boost Electric Car Range

If you want to take an electric car on a long drive, you need a gas-powered generator, like the one in the Chevrolet Volt, to extend its range. The problem is that when it's running on the generator, it's no more efficient than a conventional car. In fact, it's even less efficient, because it has a heavy battery pack to lug around.

 Gas guzzler: The fuel cell developed at the University of Maryland.

Now researchers at the University of Maryland have made a fuel cell that could provide a far more efficient alternative to a gasoline generator. Like all fuel cells, it generates electricity through a chemical reaction, rather than by burning fuel, and can be twice as efficient at generating electricity as a generator that uses combustion.

The researchers' fuel cell is a greatly improved version of a type that has a solid ceramic electrolyte, and is known as a solid-oxide fuel cell. Unlike the hydrogen fuel cells typically used in cars, solid-oxide fuel cells can run on a variety of readily available fuels, including diesel, gasoline, and natural gas. They've been used for generating power for buildings, but they've been considered impractical for use in cars because they're far too big and because they operate at very high temperatures—typically at about 900 °C.

By developing new electrolyte materials and changing the cell's design, the researchers made a fuel cell that is much more compact. It can produce 10 times as much power, for its size, as a conventional one, and could be smaller than a gasoline engine while producing as much power.

The researchers have also lowered the temperature at which the fuel cell operates by hundreds of degrees, which will allow them to use cheaper materials. "It's a huge difference in cost," says Eric Wachsman, director of the University of Maryland Energy Research Center, who led the research. He says the researchers have identified simple ways to improve the power output and reduce the temperature further still, using methods that are already showing promising results in the lab. These advances could bring costs to the point where they are competitive with gasoline engines. Wachsman says he's in the early stages of starting a company to commercialize the technology.

Wachsman's fuel cells currently operate at 650 °C, and his goal is to bring that down to 350 °C for use in cars. Insulating the fuel cells isn't difficult since they're small—a fuel cell stack big enough to power a car would only need to be 10 centimeters on a side. High temperatures are a bigger problem because they make it necessary to use expensive, heat-resistant materials within the device, and because heating the cell to operating temperatures takes a long time. By bringing the temperatures down, Wachsman can use cheaper materials and decrease the amount of time it takes the cell to start.

Even with these advances, the fuel cell wouldn't come on instantly, and turning it on and off with every short trip in the car would cause a lot of wear and tear, reducing its lifetime. Instead, it would be paired with a battery pack, as a combustion engine is in the Volt, Wachsman says. The fuel cell could then run more steadily, serving to keep the battery topped without providing bursts of acceleration.
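The stack size quoted above implies a striking power density. A 10-centimeter cube is one litre of volume, so if it is to supply a car's steady cruising load, taken here as roughly 30 kW (a hypothetical figure; the article does not give the stack's rated power), the stack must deliver on the order of 30 kW per litre:

```python
side_m = 0.10                     # "10 centimeters on a side"
volume_l = (side_m ** 3) * 1000   # cubic metres -> litres
cruise_kw = 30.0                  # hypothetical steady highway power draw

power_density = cruise_kw / volume_l
print(round(volume_l, 2), "L")          # 1.0 L
print(round(power_density, 1), "kW/L")  # 30.0 kW/L
```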

The researchers achieved their result largely by modifying the solid electrolyte material at the core of a solid-oxide fuel cell. In fuel cells on the market, such as one made by Bloom Energy, the electrolyte has to be made thick enough to provide structural support. But the thickness of the electrolyte limits power generation. Over the last several years, researchers have been developing designs that don't require the electrolyte to support the cell so they can make the electrolyte thinner and achieve high power output at lower temperatures. The University of Maryland researchers took this a step further by developing new multilayered electrolytes that increase the power output still more.

The work is part of a larger U.S. Department of Energy effort, over the past decade, to make solid-oxide fuel cells practical. The first fruits of that effort likely won't be fuel cells in cars—so far, Wachsman has only made relatively small fuel cells, and significant engineering work remains to be done. The first applications of solid-oxide fuel cells in vehicles may be on long-haul trucks with sleeper cabs.

Equipment suppliers such as Delphi and Cummins are developing fuel cells that can power the air conditioners, TVs, and microwaves inside the cabs, potentially cutting fuel consumption by 85 percent compared to idling the truck's engine. The Delphi system also uses a design that allows for a thinner electrolyte, but it operates at higher temperatures than Wachsman's fuel cell. The fuel cell could be turned on on Monday and left to run at a low rate all week, and still deliver the 85 percent reduction. Delphi has built a prototype and plans to demonstrate its system on a truck next year.

By Kevin Bullis
From Technology Review

IBM Makes Revolutionary Racetrack Memory Using Existing Tools

IBM has shown that a revolutionary new type of computer memory—one that combines the large capacity of traditional hard disks with the speed and robustness of flash memory—can be made with standard chip-making tools. 

 Memory milestone: These nanowires are part of a prototype chip for a novel form of data storage that could fit more information into a smaller space than today’s technology.


The work is important because the cost and complexity of manufacturing fundamentally new computer components can often derail their development.

IBM researchers first described their vision for "racetrack" computer memory in 2008. Today, at the International Electron Devices Meeting in Washington, D.C., they unveiled the first prototype that combines on one chip all the components racetrack memory needs to read, store, and write data. The chip was fabricated using standard semiconductor manufacturing tools.

Racetrack memory stores data on nanoscale metal wires. Bits of information—digital 1s and 0s—are represented by magnetic stripes in those nanowires, which are created by controlling the magnetic orientation of different parts of the wire. 

Writing data involves inserting a new magnetic stripe into a nanowire by applying current to it; reading data involves moving the stripes along the nanowire past a device able to detect the boundaries between stripes.
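Conceptually, the write-and-shift scheme behaves like a magnetic shift register: each current pulse pushes a new stripe in at one end, and stripes march along the wire past a fixed read head. The toy model below illustrates that idea only; it is an assumption-laden sketch of the concept, not IBM's implementation.

```python
from collections import deque

# Toy model of a racetrack nanowire as a magnetic shift register.
# Each deque element is one magnetic stripe (a bit). Writing inserts a
# stripe at one end; reading shifts the stripes past a read head at the
# other end. Purely illustrative of the concept described in the article.

class RacetrackWire:
    def __init__(self, capacity):
        # capacity = how many magnetic stripes the nanowire can hold
        self.stripes = deque(maxlen=capacity)

    def write(self, bit):
        """A current pulse inserts a new magnetic stripe at the write end."""
        self.stripes.appendleft(bit)

    def read(self):
        """Shifting the stripes pops the bit passing the read head."""
        return self.stripes.pop() if self.stripes else None

wire = RacetrackWire(capacity=8)
for bit in (1, 0, 1, 1):
    wire.write(bit)
print([wire.read() for _ in range(4)])  # stripes emerge in write order: [1, 0, 1, 1]
```

The point of the real device is that this entire shift register lives in a single stationary nanowire, with no moving parts: only the magnetic stripes move.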

Earlier demonstrations of the technology employed nanowires on a silicon wafer in a specialized research machine, with other components of the memory attached separately. "All the circuits were separate from the chip with the nanowires on," says Stuart Parkin, who first conceived of racetrack memory and leads IBM's research on the technology at its research lab in Almaden, California. "Now we've been able to make the first integrated version with everything on one piece of silicon."

The new racetrack prototype was made at IBM's labs in Yorktown Heights, New York, using CMOS, the manufacturing process widely used to make processors and other semiconductor components. This proves that it should be feasible to make racetrack memory commercially, says Parkin, although much refinement is still needed.

The nickel-iron nanowires at the heart of the prototype were made by depositing a complete layer of metal onto an area of the wafer, and then etching away material to leave the nanowires behind. 
The wires are approximately 10 micrometers long, 150 nanometers wide, and 20 nanometers thick. One end of each nanowire is connected to circuits that deliver pulses of electrons with carefully controlled quantum-mechanical "spin" to write data into the nanowire as magnetic stripes. The other end of each nanowire has additional layers patterned on top that can read out data by detecting the boundaries between stripes when they move past.

Dafiné Ravelosona, an experimental physicist at the Institute of Fundamental Electronics in Orsay, France, leads a European collaboration working on its own version of racetrack memory. He says IBM's latest results are a crucial step along the road to commercialization for the technology. "It's a nice demonstration that shows it's possible to make this kind of memory using CMOS," he says. 

However, Ravelosona adds that the IBM work doesn't yet demonstrate all of the key components that make racetrack memory desirable. "They have only demonstrated that it is possible to move a single bit in each nanowire," he explains. 

Much of the promise of the technology lies in the potential to store many bits—using many magnetic stripes—in a single tiny nanowire, to achieve very dense data storage. Ravelosona suggests that the material used to make the nanowires in the new IBM device lacks the right magnetic properties to allow that.

Parkin says that the intention wasn't to target density but adds, "We're focusing on exactly this question." His group is currently working on how to fit as many magnetic stripes as possible into a nanowire and has begun experiments that suggest that wires made from a different type of material may do better.

The nickel-iron alloy of the integrated prototype is what's known as a soft magnetic material, because it can be easily magnetized and demagnetized by an external magnetic field. Parkin is also experimenting with hard magnetic materials, which get their magnetic properties from their tightly fixed crystalline structure and as a result are not easily demagnetized.

"Using this different material, we have discovered we can move the domain walls [between magnetic stripes] very fast and that they are much smaller and stronger than in the soft magnetic material used in the integrated devices," says Parkin. 

That means not only that it should be easier to put many stripes into one nanowire, but also that nanowires fabricated with less precision will still work, which should make fabrication easier. "I call this racetrack 2.0," he says.

By Tom Simonite
From Technology Review