Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's…

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and …

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a …

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the …

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at very high speed…

Six New Isotopes of the Superheavy Elements Discovered

Information gained from the new isotopes will contribute to a better understanding of the theory of nuclear shell structure, which underlies predictions of an "Island of Stability," a group of long-lasting isotopes thought to exist amidst a sea of much shorter-lived, intrinsically unstable isotopes of the superheavy elements.

The group that found the new isotopes is led by Heino Nitsche, head of the Heavy Element Nuclear and Radiochemistry Group in Berkeley Lab's Nuclear Science Division (NSD) and professor of chemistry at the University of California at Berkeley. Ken Gregorich, a senior staff scientist in NSD, is responsible for the group's day-to-day research operation at the 88-inch Cyclotron and the Berkeley Gas-filled Separator, the instrument used to isolate and identify the new isotopes. Paul Ellison of NSD, a graduate student in the UC Berkeley Department of Chemistry, formally proposed and managed the experiment and was first author of the paper reporting the results in the 29 October 2010 issue of Physical Review Letters.

"We were encouraged to try creating new superheavy isotopes by accelerating calcium 48 projectiles with Berkeley Lab's 88-Inch Cyclotron and bombarding plutonium 242 targets inside the Berkeley Gas-filled Separator here," Nitsche says. "This was much the same set-up we used a year ago to confirm the existence of element 114."

The six new isotopes placed on the chart of heavy nuclides.

The 20-member team included scientists from Berkeley Lab, UC Berkeley, Lawrence Livermore National Laboratory, Germany's GSI Helmholtz Center for Heavy Ion Research, Oregon State University, and Norway's Institute for Energy Technology. Many of its members were also on the team that first confirmed element 114 in September of 2009. Ten years earlier scientists at the Joint Institute for Nuclear Research in Dubna, Russia, had isolated element 114 but it had not been confirmed until the Berkeley work. (Elements heavier than 114 have been seen but none have been independently confirmed.)

The nuclear shell game
Nuclear stability is thought to be based in part on shell structure -- a model in which protons and neutrons are arranged in increasing energy levels in the atomic nucleus. A nucleus whose outermost shell of either protons or neutrons is filled is said to be "magic" and therefore stable. The possibility of finding "magic" or "doubly magic" isotopes of superheavy elements (with both proton and neutron outer shells completely filled) led to predictions of a region of enhanced stability in the 1960s.

The challenge is to create such isotopes by bombarding target nuclei rich in protons and neutrons with a beam of projectiles having the right number of protons, and also rich in neutrons, to yield a compound nucleus with the desired properties. The targets used by the Berkeley researchers were small amounts of plutonium 242 (242Pu) mounted on the periphery of a wheel less than 10 centimeters in diameter, which was rotated to disperse the heat of the beam.

Gregorich notes that calcium 48 (48Ca), which has a doubly magic shell structure (20 protons and 28 neutrons), "is extremely rich in neutrons and can combine with plutonium" -- which has 94 protons -- "at relatively low energies to make compound nuclei. It's an excellent projectile for producing compound nuclei of element 114."

Ellison says, "There's only a very low probability that the two isotopes will interact to form a compound nucleus. To make it happen, we need very intense beams of calcium on the target, and then we need a detector that can sift through the many unwanted reaction products to find and identify the nuclei we want by their unique decay patterns." The 88-Inch Cyclotron's intense ion beams and the Berkeley Gas-filled Separator, designed specifically to sweep away unwanted background and identify desired nuclear products, are especially suited to this task.

Element 114 itself was long thought to lie in the Island of Stability. Traditional models predicted that if an isotope of 114 having 184 neutrons (298114) could be made, it would be doubly magic, with both its proton and neutron shells filled, and would be expected to have an extended lifetime. The isotopes of 114 made so far have many fewer neutrons, and their half-lives are measured in seconds or fractions of a second. Moreover, modern models predict the proton magic number to be 120 or 126 protons. Therefore, where 298114 would actually fall inside the region of increased stability is now in question.

"Making 298114 probably won't be possible until we build heavy ion accelerators capable of accelerating beams of rare projectile isotopes more intense than any we are likely to achieve in the near future," says Nitsche. "But in the meantime we can learn much about the nuclear shell model by comparing its theoretical predictions to real observations of the isotopes we can make."

The team that confirmed element 114 observed nuclei of two isotopes, 286114 and 287114, which decayed in a tenth of a second and half a second respectively. In a subsequent collaboration with researchers at the GSI Helmholtz Center for Heavy Ion Research, two more isotopes, 288114 and 289114, were made; these decayed in approximately two-thirds of a second and two seconds respectively.

While these times aren't long, they're long enough for spontaneous fission to terminate the series of alpha decays. Alpha particles have two protons and two neutrons -- essentially they are helium nuclei -- and many heavy nuclei commonly decay by emitting alpha particles to form atoms just two protons lighter on the chart of the nuclides. By contrast, spontaneous fission yields much lighter fragments.

A new strategy
So this year the Berkeley group decided to make new isotopes using a unique strategy: instead of trying to add more neutrons to 114, they would look for isotopes with fewer neutrons. Their shorter half-lives should make it possible for new isotopes to be formed by alpha emission before spontaneous fission could interrupt the process.

"This was a very deliberate strategy," says Ellison, "because we hoped to track the isotopes that resulted from subsequent alpha decays farther down into the main body of the chart of nuclides, where the relationships among isotope number, shell structure, and stability are better understood. Through this connection, and by observing the energy of the alpha decays, we could hope to learn something about the accuracy of predictions of the shell structure of the heaviest elements."

The sum of protons and neutrons of 48Ca and 242Pu is 114 protons and 176 neutrons. To make the desired "neutron poor" 285114 nucleus, one having only 171 neutrons, first required a beam of 48Ca projectiles whose energy was carefully adjusted to excite the resulting compound nucleus enough for five neutrons to "evaporate."
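The nucleon bookkeeping in this paragraph is easy to verify; a minimal sketch, using only the numbers given in the article:

```python
# Proton/neutron bookkeeping for the 48Ca + 242Pu fusion-evaporation
# reaction described above (all numbers from the article).
PROJECTILE = {"Z": 20, "N": 28}   # calcium-48
TARGET = {"Z": 94, "N": 148}      # plutonium-242

# The compound nucleus is the sum of the two sets of nucleons.
Z = PROJECTILE["Z"] + TARGET["Z"]   # 114 protons
N = PROJECTILE["N"] + TARGET["N"]   # 176 neutrons

# "Evaporating" five neutrons leaves the neutron-poor nucleus 285-114.
N_after = N - 5                     # 171 neutrons
A_after = Z + N_after               # mass number 285

print(Z, N, N_after, A_after)       # 114 176 171 285
```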

"The process of identifying what you've made comes down to tracking the time between decays and decay energies," says Ellison. As a check against possible mistakes, the data from the experiment were independently analyzed using separate programs devised by Ellison, Gregorich, and team member Jacklyn Gates of NSD.

In this way, after more than three weeks of running the beam, the researchers observed one chain of decays from the desired neutron-light 114 nucleus. The first two new isotopes, 285114 itself and copernicium 281 produced by its alpha decay, lived less than a fifth of a second before emitting alpha particles. The third new isotope, darmstadtium 277, lived a mere eight-thousandths of a second. Hassium 273 lasted a third of a second. Seaborgium 269 made it to three minutes and five seconds before emitting an alpha particle. Finally, after another two and a half minutes, rutherfordium 265 decayed by spontaneous fission.
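The chain above follows simple arithmetic: each alpha emission removes two protons and two neutrons, so the atomic number drops by 2 and the mass number by 4 at every step. A short sketch reproducing the chain:

```python
# Alpha-decay chain reported above: each alpha emission removes
# 2 protons and 2 neutrons (mass number drops by 4).
def alpha_decay(Z, A):
    return Z - 2, A - 4

chain = [(114, 285)]   # start at the new isotope 285-114
for _ in range(5):     # five alpha decays down the chain
    chain.append(alpha_decay(*chain[-1]))

# (Z, A) pairs: 285-114 -> Cn-281 -> Ds-277 -> Hs-273 -> Sg-269 -> Rf-265
print(chain)
```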

Ellison says, "In the grand scheme, the theoretical predictions were pretty good" when the actual measurements were compared to the decay properties predicted by modern nuclear models. "But there were small-scale interesting differences."

In particular, the heaviest new isotopes, those of 114 and copernicium, showed smaller energies associated with the alpha decay than theory predicts. These discrepancies can be used to refine the theoretical models used to predict the stability of the superheavy elements.

As Gregorich puts it, "our new isotopes are on the western shore of the Island of Stability" -- the shore that's less stable, not more. Yet the discovery of six new isotopes, reaching in an unbroken chain of decays from element 114 down to rutherfordium, is a major step toward better understanding the theory underlying exploration of the region of enhanced stability that is thought to lie in the vicinity of element 114 -- and possibly beyond.

This research was supported by the DOE Office of Science and the National Nuclear Security Administration.


Sensor Detects Emotions through the Skin

When autistic children get stressed, they often don't show it. Instead their tension might build until they have a meltdown, which can result in aggression toward others and even self-injury. Because autistic children often don't understand or express their emotions, teachers and other caregivers can have a hard time anticipating and preventing meltdowns.

Sensing emotions: The Q Sensor measures skin conductance, temperature, and motion to record a wearer’s reactions to events.

A new device developed by Affectiva, based in Waltham, Massachusetts, detects and records physiological signs of stress and excitement by measuring slight electrical changes in the skin. While researchers, doctors, and psychologists have long used this measurement--called skin conductance--in the lab or clinical setting, Affectiva's Q Sensor is worn on a wristband and lets people keep track of stress during everyday activities. The Q Sensor stores or transmits a wearer's stress levels throughout the day, giving doctors, caregivers, and patients themselves a new tool for observing reactions. Such data could provide an objective way to see and communicate what might be causing stress for a person, says Rosalind Picard, director of the Affective Computing Research Group at MIT and cofounder of Affectiva. She demonstrated the sensor last month at the Future of Health Technology Summit 2010 in Cambridge, Massachusetts. 

"This certainly sounds like interesting technology," says autism specialist Helen Tager-Flusberg, director of the Lab of Developmental Cognitive Neuroscience at Boston University. She says the sensors will need rigorous data proving their accuracy, but "the promise of new technologies like this may well improve our effectiveness to work with individuals with autism in daily life."

When a person--autistic or not--experiences stress or enters "fight or flight" mode, moisture collects under the skin (often leading to sweating) as a sympathetic nervous system response. This rising moisture makes the skin more electrically conductive. Skin conductance sensors send a tiny electrical pulse to one point on the skin and measure the strength of that signal at another point to detect its conductivity.
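The principle can be sketched in a few lines of code. To be clear, this is not Affectiva's algorithm: the detection rule, thresholds, and sample readings below are all hypothetical, chosen only to illustrate the idea of flagging a sudden conductance rise against a baseline.

```python
# Illustrative only: conductance G = I / V (siemens). Rising skin
# moisture raises G; a simple detector flags samples where conductance
# jumps well above a running baseline. NOT Affectiva's algorithm --
# the window and factor below are hypothetical.
def conductance(current_amps, voltage_volts):
    return current_amps / voltage_volts

def flag_arousal(readings_microsiemens, window=5, factor=1.5):
    """Flag indices where a reading exceeds `factor` times the mean of
    the previous `window` samples (hypothetical threshold rule)."""
    flags = []
    for i in range(window, len(readings_microsiemens)):
        baseline = sum(readings_microsiemens[i - window:i]) / window
        if readings_microsiemens[i] > factor * baseline:
            flags.append(i)
    return flags

# A calm baseline followed by a sudden rise:
samples = [2.0, 2.1, 2.0, 2.2, 2.1, 2.0, 5.5, 6.0]
print(flag_arousal(samples))   # [6, 7]
```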

"When you see this fight-or-flight response, it doesn't tell you it's definitely stress, it just tells you something has changed," says Picard. "Are they excited, hurting, are they stressed by a sound or person in the room? It doesn't perfectly correspond to stress, as it can also go up with anticipation and excitement, but when you see it change, you know something's going on and you can look for the cause." She adds that having clues to a person's stress levels, which might not otherwise be detectable, could give caregivers and researchers more insight into--and possibly a way to anticipate--the harmful behaviors of autism, such as head banging. Caregivers can try to identify and block sources of stress and learn what activities restore calm.

"I've been doing this for 25 years, and it's one of the most exciting things I've seen," says Kathy Roberts, founder and executive director of the Giant Steps School, an institute in Fairfield, Connecticut, for children with autism, many of whom are nonverbal and use assistive technologies, like the iPad's touch screen, to communicate. The school has been using the Q Sensors for about six months to let therapists see which activities--such as relaxation techniques like breathing exercises--affect the well-being of individual students. Aside from having difficulty communicating, many of the students have trouble understanding their feelings. "Often students can't really describe their internal state to us at all. What these sensors are allowing us to do is to have direct feedback, which allows us to see this internal state in a very concrete way," Roberts says. She adds that the Q Sensor is much easier and less obtrusive for autistic students than sitting at a monitor for biofeedback, a traditional method for analyzing emotional states. Roberts believes that the sensors have the potential to reveal more about sleep--which troubles many autistic children--and could even provide early detection for seizures.

The beta version of the device will be available to researchers and educators in November for around $2,000, says Picard. She cautions that heightened skin conductance is not an absolute measurement of stress, because it applies to excitement as well as distress. She says the information needs to be evaluated in context. Additionally, stress levels can be hard to accurately detect when the wearer is taking medication or has attention deficit hyperactivity disorder or attention deficit disorder.

The Q Sensor can be worn as part of a wristband or a smaller module that can slip beneath a sweatband or baseball cap to make it discreet. After field testing, Picard's company designed it for kids: the actual sensor--which is flat and a little less than four by four centimeters--can be wiped down, and the wristband itself can go in the washer. The device also has a temperature sensor to help correct for mistakes: it can tell, for example, when a user is entering a hot room rather than having an emotional reaction. It also has a clock, a rechargeable battery that lasts a day, an external button that lets a person put an event marker on the data, and a motion sensor that tracks movement in three directions. (It's able to distinguish, for example, when you're sitting or plummeting down a roller coaster.) To download data, a caregiver or user can plug the sensor into a PC or Mac via USB and use software to view, compare, and annotate the data with descriptions of events during high- and low-stress periods.

Though Picard has largely focused on using the sensor with autistic children, a team at Children's Hospital in Boston is using the Q Sensors with epileptic patients in order to understand more about the onset of seizures. And a research group at Massachusetts General Hospital plans to place the sensors on babies to monitor normal growth of the autonomic nervous system.

Monica Werner, director of the Model Asperger Program at the Ivymount School, in Rockville, Maryland, for children with learning disabilities and autism, plans to use the Q Sensor to help second-grade through 10th-grade students better moderate their emotions. She hopes it can lead to more subtle methods for reducing a child's stress, because some kinds of intervention can compound a child's anxiety. 

"The beauty of this is, it's a way of providing feedback and intervention that is much less socially intense," Werner says. The program will couple the sensor information with an app on the iPod Touch that lets students report on how they feel during a class. At the end of the day, the teachers will discuss the students' reports and physiological signs with them to figure out what went wrong and how to better solve the problems. Eventually, she hopes to make use of what she calls "the holy grail": real-time feedback, which Affectiva plans to enable in a later version of the device. 

"It's important for those of us who are therapists and teachers," Werner says, "to know when to get in there."

By Kristina Grifantini 
From Technology Review

A $1.50 Lens-Free Microscope

Using a $1.50 digital camera sensor, scientists at Caltech have created the simplest and cheapest lens-free microscope yet. Such a device could have many applications, including helping diagnose disease in the developing world, and enabling rapid screening of new drugs.

The best current way to diagnose malaria is for a skilled technician to examine blood samples using a conventional optical microscope. But this is impractical in parts of the world where malaria is common. A simple lens-free imaging device connected to a smart phone or a PDA could automatically diagnose disease. A lensless microscope could also be used for rapid cancer or drug screening, with dozens or hundreds of microscopes working simultaneously.

No lens required: Researcher Guoan Zheng injects a sample into the inlet of the optofluidic microscope.
Credit: Changhuei Yang Research 

The Caltech device is remarkably simple. A system of microscopic channels, called microfluidics, leads a sample across the light-sensing chip, which snaps images in rapid succession as the sample passes. Unlike previous iterations, there are no other parts. Earlier versions featured pinhole apertures and an electrokinetic drive that used an electric field to move cells in a fixed orientation. In the new device, this complexity is eliminated thanks to a clever design and more sophisticated software algorithms. Samples flow through the channel because of a tiny difference in pressure from one end of the chip to the other. The device's makers call it a subpixel resolving optofluidic microscope, or SROFM.

"The advantage here is that it's simpler than their previous approaches," says David Erickson, a microfluidics expert at Cornell University.

Cells tend to roll end over end as they pass through a microfluidic channel. The new device uses this behavior to its advantage by capturing images and producing a video. By imaging a cell from every angle, a clinician can determine its volume, which can be useful when looking for cancer cells, for example. Changhuei Yang, who leads the lab where the microscope was developed, says this means samples, such as blood, do not have to be prepared on slides beforehand.

The current resolution of the SROFM is 0.75 microns, which is comparable to a light microscope at 20 times magnification, says Guoan Zheng, lead author of a recent paper on the work, published in the journal Lab on a Chip.

The sensor has pixels that are 3.2 microns on each side. A "super resolution" algorithm assembles multiple images (50 for each high-resolution image) to create an enhanced-resolution image--as if the sensor had pixels 0.32 microns in size. However, super-resolution techniques can only distinguish features that are separated by at least one pixel, meaning the final resolution must be at least twice the pixel size. This is why a 0.32-micron pixel size yields a resolution of only 0.75 microns.

Zheng's technique uses only a small portion of the chip, allowing him to capture cells at a relatively high frame rate of 300 frames per second. This yields a super-resolution "movie" of the cells at six frames per second.
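The resolution and frame-rate figures above follow from simple arithmetic; a quick sketch, using only the values reported in the article:

```python
# Pixel and frame-rate arithmetic for the SROFM, from the figures above.
pixel_um = 3.2          # physical sensor pixel pitch, microns
frames_per_image = 50   # low-res frames combined per high-res image
enhancement = 10        # effective 10x finer sampling grid

effective_pixel_um = pixel_um / enhancement   # 0.32 micron
# Super-resolution can't separate features closer than ~2 effective
# pixels, so the usable resolution is a bit over 2 x 0.32 = 0.64 micron;
# the paper reports 0.75 micron.

sensor_fps = 300
movie_fps = sensor_fps / frames_per_image     # 6 frames per second

print(round(effective_pixel_um, 2), movie_fps)   # 0.32 6.0
```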

Using a higher-resolution CMOS sensor should allow an even better ultimate resolution, says Seung Ah Lee, another collaborator on the project. Lee wants to get the resolution up to the equivalent of 40x magnification, so that the technique can be used for diagnosis of malaria via automated recognition of abnormal blood cells.

Aydogan Ozcan, a professor at UCLA who is developing a competing approach, says that Zheng's work is "a valuable advance for optofluidic microscopy," in that this system is simpler, offers higher resolution, and is easier to use than previous microscopes. However, Ozcan says that the technique has limitations.

The microfluidic channel must be quite small, says Ozcan, which means the approach can't be applied to particles that might vary greatly in size, and the channel must be built to accommodate the largest particle that might flow through it. Ozcan's own lensless microscope does not use microfluidic channels, and instead captures a "hologram" of the sample by interpreting the interference pattern of an LED lamp shining through it. This method has no such limitations.

"From my perspective, these are complementary approaches," says Ozcan, whose ultimate aim is cheap, cell-phone based medical diagnostic tools for the developing world.

By Christopher Mims  
From Technology Review

70 mpg, without a Hybrid

Next year, Mazda will sell a car in Japan that gets 70.5 miles per gallon (mpg), or 30 kilometers per liter. The fuel economy rating won't be nearly this good in the United States because of differing requirements, but even so, the car will likely use about as little fuel as a hybrid such as the Toyota Prius--without that car's added costs for its electric motor and batteries.
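The two headline figures are unit conversions of each other; a quick check using standard conversion factors:

```python
# Converting the headline figure of 30 km/L into miles per US gallon.
KM_PER_MILE = 1.609344
LITERS_PER_US_GALLON = 3.785411784

def km_per_liter_to_mpg(kmpl):
    """km/L -> miles per US gallon."""
    return kmpl * LITERS_PER_US_GALLON / KM_PER_MILE

print(round(km_per_liter_to_mpg(30), 1))   # 70.6 (the article rounds to 70.5)
```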

The Mazda, a subcompact called the Demio in Japan and the Mazda 2 elsewhere, will include a package of changes that improves fuel economy by about 30 percent over the current model. These include a more efficient engine and transmission, and a lighter body and suspension. The Mazda 2, and a range of new cars from other automakers that have been engineered to meet more stringent fuel economy standards, demonstrate what some experts have been saying for some time--internal combustion-powered cars are far from outdated. Indeed, improvements to gas-powered cars can reduce worldwide fuel consumption more quickly than introducing hybrids or electric vehicles, because variations on traditional engines tend to be less expensive and can be quickly implemented on more cars.

"We've been making engines for 100 years, and we keep figuring out how to make improvements in them. We will continue to figure out further improvements," says Greg Johnson, the manager of Ford's North American powerpacks. "For another 50 years, if not more, the internal combustion engine will be the primary driver." This week, Ford announced changes to its Focus model that improve its fuel economy by about 17 percent, to an estimated 40 mpg.

Efficient engine: A rendering of Mazda’s new engine. Compared to its predecessor, it has 15 percent better fuel economy and 15 percent more torque, improving vehicle acceleration. The extra-long, curved exhaust pipes on the front (in black) help the engine draw more energy from a gallon of gas.

Mazda says the biggest source of improvement for the Mazda 2 is a new engine that compresses the fuel-air mixture in the engine far more than conventional gasoline engines do. Ordinarily, gas engines have about a 10-to-1 compression ratio. Mazda increased this to 14 to 1, a level typically seen only in diesel engines. Increasing compression has long been known to increase efficiency, but compressing the fuel-air mixture too much causes it to ignite prematurely--before the spark sets it off--a phenomenon called knocking. That decreases performance and can damage the engine. Mazda has introduced innovations to avoid knocking. 
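The link between compression ratio and efficiency can be illustrated with the ideal Otto-cycle formula. This is textbook thermodynamics, not Mazda's engineering model; real engines fall well short of these ideal numbers, which is part of why the quoted real-world gain is about 15 percent rather than the ideal-cycle figure:

```python
# Ideal Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma).
# Textbook physics for illustration only; real engines are far less
# efficient than the ideal cycle.
GAMMA = 1.4   # heat capacity ratio for air

def otto_efficiency(compression_ratio, gamma=GAMMA):
    return 1 - compression_ratio ** (1 - gamma)

conventional = otto_efficiency(10)   # ~0.602 at a 10-to-1 ratio
mazda = otto_efficiency(14)          # ~0.652 at a 14-to-1 ratio
print(round(conventional, 3), round(mazda, 3))
```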

As a number of automakers, including Ford, are doing, Mazda has introduced direct injection--which involves spraying fuel directly into the engine's combustion chamber rather than into an adjacent port. Doing this cools the chamber, which helps prevent premature ignition. Mazda also modified the exhaust system--lengthening and reshaping the exhaust pipes to allow more exhaust gas to escape after combustion. Removing these hot gases also keeps the temperature down, but it has the drawback of interfering with emissions controls. That required other changes in the engine, including modifying ignition timing and the shape of the pistons.

Mazda also found that above a certain compression ratio, some of the bonds in gasoline molecules begin to break, generating heat. These reactions increase the total amount of energy released from the gasoline, improving efficiency, the company says. To take advantage of this phenomenon, the engineers set the ignition timing to occur after these bonds start to break. 

Just as important for improving fuel economy were a new transmission and a redesign of the frame to use less steel, or to use lighter, high-tensile steel. Mazda also says it redesigned the suspension system to make it lighter without sacrificing performance. Mazda has also announced a diesel engine that could be about 20 percent more efficient than the new gasoline one.

The 70.5 mpg rating the car received in Japan isn't a clear indication of what Mazda 2's rating will be in the United States, which has different test procedures, safety requirements, and emissions requirements. The current version of the Mazda 2 was rated at 54 mpg in Japan, but only 35 mpg (for the manual transmission version) in the United States. Michael Omotoso, manager of the power train forecasting group at J.D. Power and Associates, estimates that the new car could be rated between 50 and 60 mpg in the U.S., giving it a chance to eclipse the 51 mpg rating of the Prius (which gets 48 mpg on the highway). 

Mazda will introduce the new engine and transmission in a number of vehicles next year, although it has not announced the specific models, or when the new Mazda 2 will be available in the United States. The new engine and transmission will be introduced in the United States next year in a larger car that will get about 43 mpg. 

Although the new Mazdas avoid the costly motor, power electronics, and battery pack required in a hybrid, the improvements will likely add to the cost of the cars. Volkswagen recently introduced an 83-mpg diesel vehicle that wasn't successful because of the high costs of achieving these fuel economy levels, Omotoso says. Mazda hasn't announced prices yet. "I would think they learned a lesson from Volkswagen," he says.

By Kevin Bullis 
From Technology Review

Graphene Transistors Do Triple Duty in Wireless Communications

Graphene's potential was recognized earlier this month when those who first studied it in the lab won the 2010 Nobel Prize in Physics. But researchers are just beginning to figure out how to take advantage of the novel carbon material in electronic devices.

Researchers have already made blisteringly fast graphene transistors. Now they've used graphene to make a transistor that can be switched between three different modes of operation, which in conventional circuits must be performed by three separate transistors. These configurable transistors could lead to more compact chips for sending and receiving wireless signals.

Triple transistor: Single graphene transistors like this one can be made to operate in three modes and perform functions that usually require multiple transistors in a circuit. 

Chips that use fewer transistors while maintaining all the same functions could be less expensive, use less energy, and free up room inside portable electronics like smart phones, where space is tight. The new graphene transistor is an analog device, of the type that's used for wireless communications in Bluetooth headsets and radio-frequency identification (RFID) tags.

Graphene's perfect structure at the atomic level provides smooth sailing for electrons, and the material conducts electrons better than any other material does at room temperature. So far, it's been used to make transistors that switch at about 100 gigahertz, or 100 billion times per second--10 times faster than the best silicon transistors--and it's predicted the material could be made into transistors 1,000 times faster still. And because graphene is smooth and flat, it should be compatible with the chip-making equipment at semiconductor fabs.

But graphene offers other properties besides just being a great conductor of electrons, says Kartik Mohanram, professor of electrical and computer engineering at Rice University. It's also possible to change the behavior of a graphene transistor on the fly, something that can't be done with conventional silicon transistors. The transistors that make up conventional silicon logic circuits can only behave in one of two ways, called "n" for negative or "p" for positive--they either control the flow of electrons or the flow of "holes," or positive charges. Whether a conventional transistor is p-type or n-type is determined during fabrication. But graphene is ambipolar: it can conduct both positive and negative charges.
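Ambipolar operation can be pictured with a toy model: the gate voltage relative to graphene's charge-neutrality (Dirac) point selects which carriers dominate. This is not the authors' device model, and the voltage thresholds below are hypothetical, chosen purely for illustration:

```python
# Toy illustration of ambipolarity (not the authors' device model):
# gate voltage below the charge-neutrality (Dirac) point gives hole
# conduction ("p"), above it gives electron conduction ("n"), and near
# it both carriers contribute. All thresholds are hypothetical.
DIRAC_POINT_V = 0.0
MIXED_WINDOW_V = 0.1

def conduction_mode(gate_voltage):
    if abs(gate_voltage - DIRAC_POINT_V) < MIXED_WINDOW_V:
        return "ambipolar"   # electrons and holes conduct comparably
    return "n" if gate_voltage > DIRAC_POINT_V else "p"

print([conduction_mode(v) for v in (-1.0, 0.05, 1.0)])
# ['p', 'ambipolar', 'n']
```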

Mohanram has designed a transistor that can be changed, and has made and tested it with Alexander Balandin, professor of materials science and engineering at the University of California, Riverside. By changing the voltage applied to a sheet of graphene using three electrical gates, they could switch the graphene between three different modes: n-type, p-type, and a mode where it conducted positive and negative charge equally. This triple-mode transistor acts as an amplifier and can be used to encode a data stream by changing the frequency and the phase of a signal. Changes in phase and frequency are used to encode data in telecommunications devices such as Bluetooth headsets and RFID tags.

Mohanram and Balandin's device is the first that can do this level of signal processing in a single transistor. Usually such signaling requires multiple transistors. Their transistor is a proof-of-concept device, but Mohanram says it demonstrates what might be possible with graphene.

Other groups have demonstrated multimode transistors using graphene, carbon nanotubes, and organic molecules. The researchers say that the new graphene triple-mode circuit can be controlled better than those devices.

Control is critical when designing transistors that are ambipolar, says Subhasish Mitra, professor of electrical engineering and computer science at Stanford University. "People used to consider ambipolarity a bad thing" because it's typically difficult to control how an ambipolar transistor will behave, which makes it difficult to use them at all, he says.

Mitra notes that the benefits shown at the single-transistor level must now be demonstrated in systems. The electrical gates needed to control the behavior of arrays of ambipolar transistors might end up making circuits much harder to design and fabricate. "Now that they have shown that they can do this, we need to see what benefit it brings at a system level," he says.

Balandin and Mohanram are now working on graphene circuits to test the benefits of ambipolarity at a higher level. They're also changing the design of the transistors themselves to make them more efficient.

No one has yet published any articles on the creation of integrated circuits made of graphene transistors, but Balandin says researchers are now on the verge of putting it all together. As materials scientists and device fabricators work on overcoming the challenges of working with graphene, says Mohanram, circuit designers should keep pace with them and think creatively about ambipolarity and other possibilities opened up by graphene and other nanomaterials. "New designs and new ways of thinking can lag behind the development of new materials," he says.

By Katherine Bourzac 
From Technology Review

The Moon Hides Ice Where the Sun Don’t Shine

The moon is pockmarked with cold, wet oases that could contain enough water ice to be useful to manned missions.

A year after NASA’s Lunar Crater Observation and Sensing Satellite (LCROSS) smashed into the surface of the moon, astronomers have confirmed that lunar craters can be rich reservoirs of water ice, plus a pharmacopoeia of other surprising substances.

The debris plume about 20 seconds after LCROSS impact.

On Oct. 9, 2009, the LCROSS mission sent a spent Centaur rocket crashing into Cabeus crater near the moon’s south pole, a spot previous observations had shown to be loaded with hydrogen. A second spacecraft flew through the cloud of debris kicked up by the explosion to search for signs of water and other ingredients of lunar soil.

And water appeared in buckets. The first LCROSS results reported that about 200 pounds of water appeared in the plume. A new paper in the Oct. 22 Science ups the total amount of water vapor and water ice to 341 pounds, plus or minus 26 pounds.

Given the total amount of soil blown out of the crater, astronomers estimate that 5.6 percent of the soil in the LCROSS impact site is water ice. Earlier studies suggested that soils containing just 1 percent water would be useful for any future space explorers trying to build a permanent lunar base.
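The reported figures are easy to cross-check. Here is a back-of-envelope sketch in Python using only the numbers quoted above; the implied soil mass is derived from them, not reported in the article:

```python
# Back-of-envelope check of the LCROSS numbers reported above.
# The water mass and concentration are from the article; the
# implied soil mass in the plume is derived, not reported.
water_lb = 341.0          # water vapor + ice detected in the plume
uncertainty_lb = 26.0     # reported error bar
concentration = 0.056     # 5.6 percent water ice by mass

implied_soil_lb = water_lb / concentration
print(f"Implied soil mass sampled: about {implied_soil_lb:.0f} lb")

# The 1 percent "break-even" threshold for resource extraction:
threshold = 0.01
print(f"Exceeds break-even by a factor of {concentration / threshold:.1f}")
```

The derived soil mass (roughly 6,000 pounds) is consistent with the 8,818-to-13,228-pound range of total debris mentioned later in the article, since not all ejected material was sampled by the instruments.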

“The number of 1 percent was generally agreed to as what was needed to be a net profit, a net return on the effort to extract it out of the dark shadows,” said NASA planetary scientist Anthony Colaprete in a press conference Oct. 21. “We saw 5 percent, which means that indeed where we impacted would be a net benefit to somebody looking for that resource.”

Water could lurk not just in the moon’s deep dark craters, but also as permafrost beneath the sunlit surface. Based on the impact data, water is probably mixed into the soil as loose ice grains, rather than spread out in a concentrated skating rink. This distribution could make the water easier to harvest.

“The water ice is in this rather malleable, dig-able kind of substrate, which is good,” Colaprete said. “At least some of the water ice, you could go in and literally just scoop it up if you needed to.”

But the plume wasn’t just wet. A series of papers in Science report observations from both LCROSS and LRO that show a laundry list of other compounds were also blown off the face of the moon, including hydroxyl, carbon monoxide, carbon dioxide, ammonia, free sodium, hydrogen, methane, sulfur dioxide and, surprisingly, silver.

Temperature map of the lunar south pole from the LRO Diviner Lunar Radiometer Experiment, showing several intensely cold impact craters. UCLA/NASA/Jet Propulsion Laboratory, Pasadena, Calif./Goddard

The impact carved out a crater 80 to 100 feet wide, and kicked between 8,818 pounds and 13,228 pounds of debris more than 6 miles out of the dark crater and into the sunlight where LCROSS could see it. Astronomers, as well as space enthusiasts watching online, expected to see a bright flash the instant the rocket hit, but none appeared.

The wimpy explosion indicates that the soil the rocket plowed into was “fluffy, snow-covered dirt,” said NASA chief lunar scientist Michael Wargo.

The soil is also full of volatile compounds that evaporate easily at room temperature, suggests planetary scientist Peter Schultz of Brown University, lead author of one of the new papers. The loose soil shielded the view of the impact from above.

Data from an instrument called LAMP (Lyman Alpha Mapping Project) on LRO shows that the vapor cloud contained about 1,256 pounds of carbon monoxide, 300 pounds of molecular hydrogen, 350 pounds of calcium, 265 pounds of mercury and 88 pounds of magnesium. Some of these compounds, called “super-volatile” for their low boiling points, are known to be important building blocks of planetary atmospheres and the precursors of life on Earth, says astronomer David Paige of the University of California, Los Angeles.
Relative to the amount of water in the crater, these materials were far more abundant than they are in comets or the interstellar medium, or than what is predicted from reactions in the protoplanetary disk.

“It’s like a little treasure trove of stuff,” said planetary scientist Greg Delory of the University of California, Berkeley, who was not involved in the new studies.

Astronomers picked Cabeus crater partly because its floor has been in constant shadow for billions of years. Without direct sunlight, temperatures in polar craters on the moon can drop as low as -400 degrees Fahrenheit, cold enough for compounds to stick to grains of soil the way your tongue sticks to an ice cube.
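For readers more comfortable in Celsius or kelvin, the -400 degrees Fahrenheit figure converts with the standard formula; a quick sketch:

```python
# Convert the article's -400 degrees Fahrenheit to Celsius and kelvin
# using the standard conversion formulas.
def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

f = -400.0
c = f_to_c(f)
k = c + 273.15
print(f"{f} F = {c:.0f} C = {k:.1f} K")  # -240 C, or about 33 K
```

That is only a few dozen degrees above absolute zero, which is why even "super-volatile" compounds stay frozen to the crater floor.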

Other factors, like micrometeorite impacts and ultraviolet photons that carry little heat but significant amounts of energy, can release these molecules from the moon’s cold traps. The composition of the lunar surface represents a balancing act between what sticks and what is released.

The fact that so many different materials, most of which are usually gaseous at room temperature and react easily with other chemicals, remain stuck to the moon gives astronomers clues as to how they got there.

“Perhaps the moon is presently active and there’s all kinds of chemistry going on and stuff being produced, continually collecting in these polar regions,” Delory said. “Maybe it’ll tell us the moon is in fact a much more active and dynamic system than we thought, and there’s water being concentrated at the poles by present-day ongoing processes.”

Another possibility is that these materials hitched a ride on comets or asteroids, Schultz suggests. Compounds deposited all over the moon could have migrated to the poles over the course of billions of years, where they were trapped by the cold or buried under the soil.

There’s only one sure way to find out.
“We need to go there,” Delory said. Whether the water will be a useful resource for future astronauts or not, the ice itself is a rich stockpile of potential scientific information, he said. “That’s as much a reason to go there, for the story that this water tells.”

By Lisa Grossman

Energy Revolution Key to Complex Life: Depends on Mitochondria, Cells' Tiny Power Stations

"The underlying principles are universal. Energy is vital, even in the realm of evolutionary inventions," said Dr Lane, of the UCL Department of Genetics, Evolution and Environment. "Even aliens will need mitochondria."

For 70 years scientists have reasoned that the evolution of the nucleus was the key to complex life. Now, in work published in Nature, Lane and Martin reveal that mitochondria were in fact fundamental to the development of complex innovations like the nucleus because of their function as power stations in the cell.

"This overturns the traditional view that the jump to complex 'eukaryotic' cells simply required the right kinds of mutations. It actually required a kind of industrial revolution in terms of energy production," explained Dr Lane.

Artist's rendering of basic cell structure, including mitochondria. 

At the level of our cells, humans have far more in common with mushrooms, magnolias and marigolds than we do with bacteria. The reason is that complex cells like those of plants, animals and fungi have specialized compartments including an information centre, the nucleus, and power stations -- mitochondria. These compartmentalised cells are called 'eukaryotic', and they all share a common ancestor that arose just once in four billion years of evolution.

Scientists now know that this common ancestor, 'the first eukaryote', was a lot more sophisticated than any known bacterium. It had thousands more genes and proteins than any bacterium, despite sharing other features, like the genetic code. But what enabled eukaryotes to accumulate all these extra genes and proteins? And why don't bacteria bother?

By focusing on the energy available per gene, Lane and Martin showed that an average eukaryotic cell can support an astonishing 200,000 times more genes than bacteria.

"This gives eukaryotes the genetic raw material that enables them to accumulate new genes, big gene families and regulatory systems on a scale that is totally unaffordable to bacteria," said Dr Lane. "It's the basis of complexity, even if it's not always used."

"Bacteria are at the bottom of a deep chasm in the energy landscape, and they never found a way out," explained Dr Martin. "Mitochondria give eukaryotes four or five orders of magnitude more energy per gene, and that enabled them to tunnel straight through the walls of the chasm."
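As a quick consistency check, the 200,000-fold figure from the previous paragraphs can be expressed in orders of magnitude, which lines up with Dr Martin's "four or five orders of magnitude" remark:

```python
# A consistency check on the figures quoted above: Lane and Martin's
# 200,000-fold advantage in genes supportable per cell, expressed in
# orders of magnitude, matches Dr Martin's "four or five orders of
# magnitude more energy per gene."
import math

gene_capacity_ratio = 200_000
orders = math.log10(gene_capacity_ratio)
print(f"{orders:.1f} orders of magnitude")  # 5.3
```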

The authors went on to address a second question: why can't bacteria just compartmentalise themselves to gain all the advantages of having mitochondria? They often made a start but never got very far.

The answer lies in the tiny mitochondrial genome. These genes are needed for cell respiration, and without them eukaryotic cells die. If cells get bigger and more energetic, they need more copies of these mitochondrial genes to stay alive.

Bacteria face exactly the same problem. They can deal with it by making thousands of copies of their entire genome -- as many as 600,000 copies in the case of giant bacterial cells like Epulopiscium, an extreme case that lives only in the unusual guts of surgeonfish. But all this DNA has a big energetic cost that cripples even giant bacteria -- stopping them from turning into more complex eukaryotes. "The only way out," said Dr Lane, "is if one cell somehow gets inside another one -- an endosymbiosis."

Cells compete among themselves. When living inside other cells they tend to cut corners, relying on their host cell wherever possible. Over evolutionary time, they lose unnecessary genes and become streamlined, ultimately leaving them with a tiny fraction of the genes they started out with: only the ones they really need.

The key to complexity is that these few remaining genes weigh almost nothing. Calculate the energy needed to support a normal bacterial genome in thousands of copies and the cost is prohibitive. Do it for the tiny mitochondrial genome and the cost is easily affordable, as shown in the Nature paper. The difference is the amount of DNA that could be supported in the nucleus, not as repetitive copies of the same old genes, but as the raw material for new evolution.

"If evolution works like a tinkerer, evolution with mitochondria works like a corps of engineers," said Dr Martin.

The trouble is that, while cells within cells are common in eukaryotes, which often engulf other cells, they're vanishingly rare in more rigid bacteria. And that, Lane and Martin conclude, may well explain why complex life -- eukaryotes -- only evolved once in all of Earth's history.


Cheap Diesel-Powered Fuel Cells

A Norwegian company is developing silent diesel generators based on a new kind of fuel cell. Nordic Power Systems, which is making the generators for that country's military, has successfully tested a 250-watt solid-acid fuel cell developed by SAFCell, a spinoff from Caltech. The companies are now working on a 1.2-kilowatt system.

Solid-acid fuel cells are still at an early stage of development. But SAFCell says that they're simpler than conventional fuel cells, and the key components (such as the electrolyte) can be made from relatively cheap materials. The researchers developing the technology think it could be cheap enough to replace the turbines used in high-efficiency power plants. (The high cost of existing fuel cells limits them to niche applications, such as backup power.) 

 Power plants: These two prototype fuel-cell stacks from SAFCell generate electricity from hydrogen, even if it’s derived from diesel fuel and is contaminated with as much as 20 percent carbon monoxide. Both are made of 10 connected fuel cells. The small one--measuring three inches in diameter--generates 30 watts, and the larger one 200 watts.

The new generators work by producing hydrogen gas from diesel in a process called reforming (the fuel is heated, but not combusted, and mixed with air and steam). The hydrogen is then fed into the fuel cell to make electricity. Unlike the fuel cells that have been tested in cars, the new ones can tolerate impurities, such as carbon monoxide, that are present in hydrogen made from diesel. In large-scale production, the new fuel cells could also be significantly cheaper than high-temperature solid-oxide fuel cells, such as those being sold by Bloom Energy, because they operate at lower temperatures, and so don't require expensive heat-tolerant materials, says Calum Chisolm, SAFCell's CEO.

Solid-acid fuel cells were first demonstrated in the lab 10 years ago. They're based on solid acids, a class of chemicals discovered in the 1980s that are good at conducting hydrogen ions, or protons. Solid acids were long thought to be impractical for fuel cells because they dissolve in water, which is produced when fuel cells combine hydrogen and oxygen. Sossina Haile, a professor of materials science and chemical engineering at Caltech, and her colleagues found a simple way around this problem: operate the fuel cells at temperatures high enough to turn the water into steam, which doesn't dissolve the solid acids.

The resulting fuel cells combined the benefits of two main types of fuel cells: Polymer-electrolyte-membrane fuel cells and solid-oxide cells. Polymer-electrolyte-membrane fuel cells, the type GM and other car companies use in their prototype fuel-cell vehicles, are convenient because they run at low temperatures. But at these low temperatures, carbon monoxide can collect on catalysts and prevent them from doing their job. This requires them to use purified hydrogen fuel, which isn't widely available. The new solid-acid fuel cells can run at higher temperatures (250 °C instead of 90 °C) at which carbon monoxide isn't a problem, so they can run on hydrogen made on the spot from natural gas and even relatively dirty fuels such as diesel, which is far more readily available than hydrogen. 

In their ability to use a range of fuels, the new fuel cells are like solid-oxide ones. But the latter typically operate at high temperatures--800 °C to 1,000 °C--and require expensive materials. The new fuel cells, once in commercial production, are expected to cost about as much as solid-oxide fuel cells being sold by Bloom Energy, Chisolm says, but costs could come down quickly to about a tenth of the cost of the Bloom technology as the company develops and implements a range of cost saving measures. At that point, the fuel cells would be cheap enough to be competitive with high-efficiency turbines used in power plants. 

One key challenge is reducing the amount of platinum catalyst used, says Robert Savinell, a chemical engineering professor at Case Western Reserve University. Haile and the researchers at SAFCell have already identified a platinum-palladium catalyst and catalyst deposition methods that both reduce the amount of platinum required and increase power output, but the amount of platinum needs to be reduced more. They're developing new catalysts that take advantage of the fact that the system works at relatively low temperatures. 

Another option is recycling the platinum, a relatively simple process because of the chemical composition of the fuel cells, Chisolm says. That, combined with a good financing plan, could allow the fuel cells to hit the $1,000-per-kilowatt milestone widely regarded as the point at which fuel cells will see mass adoption, he says. 

By Kevin Bullis 
From Technology Review

Chinese Chip Closes In on Intel, AMD

At this year's Hot Chips conference at Stanford University, Weiwu Hu, the lead architect of the "national processor" of China, revealed three new chip designs. One of them could enable China to build a homegrown supercomputer to rank in a prestigious list of the world's fastest machines.

Chip challenger: Lemote is one of a handful of companies manufacturing Loongson-powered Netbooks, mostly for the Chinese market.

The Loongson processor family (known in China by the name Godson) is now in its sixth generation. The latest designs consist of the one-gigahertz, eight-core Godson 3B, the more powerful 16-core Godson 3C (with a speed that is currently unknown), and the smaller, lower-power one-gigahertz Godson 2H, intended for netbooks and other mobile devices. The Godson 3B will be commercially available in 2011, as will the Godson 2H, but the Godson 3C won't debut until 2012.

According to Tom Halfhill, industry analyst and editor of Microprocessor Report, the eight-core Godson 3B will still be significantly less powerful than Intel's best chip, the six-core Xeon processor. It will be able to perform roughly 30 percent fewer mathematical calculations per second. Intel's forthcoming Sandy Bridge processor and AMD's Bulldozer processor will widen the gap between chips designed by American companies and the Godson 3B.

However, China's chip-making capabilities are improving quickly. Intel's Xeon processor uses a 32-nanometer process (meaning the smallest components can be formed on this scale), while the Godson 3B uses 65 nanometers, leading to significantly slower processing speeds. But the Godson 3C processor will leapfrog current technology by using a 28-nanometer process, although this will only increase its clock speed by about a factor of two, estimates Halfhill. With its eight additional cores, this should make the 3C about four times as fast as the Godson 3B.
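Halfhill's estimate can be sanity-checked with simple arithmetic, under the idealized assumption that throughput scales linearly with clock speed and core count:

```python
# Rough sanity check of Halfhill's estimate quoted above: doubling
# the clock (from the 65 nm -> 28 nm process shrink) and doubling
# the core count (8 -> 16) should make the Godson 3C about four
# times as fast as the Godson 3B, assuming throughput scales
# linearly with both -- an idealization real workloads rarely meet.
godson_3b_cores, godson_3c_cores = 8, 16
clock_factor = 2.0

speedup = clock_factor * (godson_3c_cores / godson_3b_cores)
print(f"Estimated 3C speedup over 3B: {speedup:.0f}x")  # 4x
```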

Hu, lead architect of the Godson project, said via e-mail that China's Dawning 6000 supercomputer, originally slated for completion in mid-2010, will instead debut in 2011, using the Godson 3B. Halfhill calculates that the Dawning supercomputer will use CPUs that are slower than the fastest Intel chips. However, it could still rank on the Top 500 list of the 500 fastest supercomputers in the world--a significant coup for China's fledgling electronics industry. "Just getting into the Top 500 with a native processor is a worthy accomplishment," says Halfhill.

The Loongson processor is based on the MIPS instruction set, the basic commands that a microprocessor understands. In contrast, Intel and AMD processors are based on the x86 instruction set. Engineers at China's Institute of Computing Technology (ICT) have added more than 300 instructions to the MIPS instruction set in the latest generation of the Loongson processor, and most are devoted to vector processing, a technique for processing data in parallel that can speed operations like graphics and scientific processing. The Dawning 6000 would mark the first time a MIPS-based supercomputer has appeared in the Top 500 list since 2004.

The ongoing development of the Loongson processor family is good news for Stanford-based MIPS Technologies, which licenses the MIPS instruction set and competes with the x86, ARM, and IBM Power architectures. "It's our view that the ICT team and the MIPS instruction set are in a leading position for the [Chinese] government-driven national processor effort," says Art Swift, vice president of marketing at MIPS.
At the low end of the Godson family of processors, the new 2H chip is an incremental improvement compared to previous chips in the Godson 2 series, says Halfhill. According to Hu, the chip is designed for netbooks, other mobile devices, low-powered PCs and embedded systems.

An important factor for the Godson 2 series has been the porting of Google's Android operating system (used in smart phones, and in some tablets and netbooks) to the MIPS instruction set, says Swift, who adds that ICT engineers were very active in that effort. "The uptake of Android in China was phenomenal; they were way ahead of everyone else, and the whole rest of the field has followed," Swift says.

Hu has emphasized in the past that a primary goal of ICT's "national processor" effort is the creation of an affordable chip that can help bring China out of the industrial age and into the information age.

"I think what they're really after is a national processor that is broadly used and displaces the Intel monopoly," says Swift.

Displacing the Intel monopoly does not necessarily mean displacing the Windows monopoly, however. Despite ICT's emphasis on Android and open-source software, the Loongson family includes many instructions designed to speed up emulation of the x86 instruction set, and the Microsoft architecture team attended Hu's presentation at Hot Chips, according to Swift. "I wouldn't rule out this being a great Windows processor at some point," he says.

The Loongson family of processors may, however, face a fundamental challenge to its ability to compete with other architectures in terms of performance.

The Godson processor appears to have been designed primarily with automated circuit design tools, which is common throughout the microprocessor industry, but the processor has not been manually tweaked by engineers, which is not. This could mean unnecessary bottlenecks in the flow of data through the processor. "That's always been a puzzle to me," says Halfhill. "It's not like there is a shortage of circuit designers in China."

One of the biggest surprises of the Hot Chips presentation was the acknowledgment that if ICT's current fabrication partner, STMicro, is unable to produce the Godson 3C in a 28-nanometer process by 2011, production could be moved to the Taiwan Semiconductor Manufacturing Company. Historically, China and Taiwan have had chilly political relations even as their economic interdependence has increased.

Government support for ICT's national processor project was reaffirmed Monday when it was announced that the chip will be part of the country's 12th Five-Year Plan. If the Godson 3B shows up in a supercomputer by 2011, it will be an important milestone in China's billion-dollar effort to cultivate a homegrown CPU.

By Christopher Mims  
From Technology Review

Bendable Memory Made from Nanowire Transistors

Researchers in the U.K. have made a new kind of nanoscale memory component that could someday be used to pack more data into gadgets. The device stores bits of information using the conductance of nanoscale transistors made from zinc oxide.

Memory flexing: Nanowire transistors that switch between four different conductance states can be made on plastic substrates.

The researchers published a paper about a prototype memory device fabricated on a rigid silicon substrate last week in the online version of the journal Nano Letters. They are now testing flexible memory devices in the laboratory, says Junginn Sohn, a researcher at the University of Cambridge Nanoscience Center and lead author of the Nano Letters paper.

The nanowire device stores data electrically and is nonvolatile, meaning it retains data when the power is turned off, like the silicon-based flash memory found in smart phones and memory cards. The new memory cannot hold data for as long as flash, and it is slower and has fewer rewrite cycles, but it could potentially be made smaller and packed together more densely. And its main advantage, says Sohn, is that it is made using simple processes at room temperature, which means it can be deposited on top of flexible plastic materials. Nanowire memory could, for instance, be built into a flexible display and could be packed into smaller spaces inside cell phones, MP3 players, plastic RFID tags, and credit cards.

Flash memory elements contain transistors that store bits of data (a 1 or 0) using the presence or absence of charge on a gate electrode. However, like other silicon-based electronic devices, flash faces physical limits on how far it can be miniaturized. Memory elements are already at 25 nanometers, translating to data densities of one terabit per square inch, and are projected to reach their minimum size limit of about 20 nanometers by late 2011. Companies are doubling flash memory densities by storing two bits, or four values, in each cell: 00, 01, 10, and 11.

The nanowire device can also store four values, as different levels of conductance. It is based on a zinc-oxide nanowire transistor, which the researchers make by placing a nanowire on a silicon substrate and depositing source and drain electrodes at either end of the wire. They coat the wire with barium-titanate nanoparticles and deposit an aluminum gate electrode layer on top.

A positive gate voltage builds up positive charges on each nanowire and puts the device in a high conductance state. A negative voltage switches it to a low conductance state. The researchers used four different voltages to create four conductance states.
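To illustrate how four conductance states can encode two bits per cell, here is a minimal sketch; the conductance thresholds are hypothetical round numbers chosen for illustration, not values from the paper:

```python
# A minimal sketch of how four conductance states encode two bits
# per cell, as in the nanowire device described above. The threshold
# values are hypothetical, chosen only to illustrate the scheme.

# Hypothetical boundaries (normalized units) separating the four states.
THRESHOLDS = [0.25, 0.5, 0.75]
SYMBOLS = ["00", "01", "10", "11"]

def read_cell(conductance):
    """Quantize a normalized conductance reading into a 2-bit symbol."""
    level = sum(conductance >= t for t in THRESHOLDS)
    return SYMBOLS[level]

for g in (0.1, 0.3, 0.6, 0.9):
    print(g, "->", read_cell(g))  # 00, 01, 10, 11
```

This is the same principle multi-level-cell flash uses to double density without shrinking the cell itself.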

Others are exploring various approaches to make flexible nanoscale memory that could be scaled down beyond silicon. Memory elements based on transistors made from ferroelectric-coated graphene, carbon nanotube, and other nanowires could all eventually be made flexible. Some groups have made flexible memory using organic materials, as well as two-electrode devices based on thin films of titanium dioxide and graphene.

Compared to the zinc oxide transistors, which have three electrodes, two-terminal devices can be packed more densely and potentially even in 3-D, says Curt Richter, a researcher at the National Institute of Standards and Technology. Richter has made flexible titanium dioxide memory. But since the electronic read-and-write circuits in today's flash memory are designed for silicon transistors, "the advantage of nanowire transistors is you wouldn't have to change the control logic and architecture so much. You could tweak it a little bit and plug the device in."

A lot of work remains to be done to make the nanowire memory devices practical. Right now, they retain their conductance state for only a little over 11 hours, and they can be rewritten only about 70 times--flash memory can withstand about 100,000 writing cycles. And the nanowires are currently 100 nanometers wide and two micrometers long, although Sohn says they can potentially be scaled down to smaller than flash memory.

The researchers will also have to prove that the memory devices are fast, says James Tour, a chemistry professor at Rice University who is working on ultradense memory using graphite and silicon oxide. In the paper, the researchers rewrite the device every second. Flash memory, by comparison, has a writing speed of microseconds. "At one second, it is not even on the table for being interesting to device folks," Tour says. "They must increase the speed one million-fold to begin to catch their attention." Because it is hard to make large batches of nanowires that work uniformly and to align them on surfaces, Tour says, "doing anything on nanowire electronics would not be feasible in mass-production."
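Tour's "one million-fold" figure follows directly from the two writing speeds quoted above:

```python
# Checking Tour's "one million-fold" figure quoted above: the
# prototype rewrites about once per second, while flash writes
# in microseconds.
nanowire_write_s = 1.0
flash_write_s = 1e-6   # "a writing speed of microseconds"

required_speedup = nanowire_write_s / flash_write_s
print(f"Required speedup: {required_speedup:.0e}")  # 1e+06
```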

But Georgia Tech materials scientist Zhong Lin Wang, who has made nanogenerators and sensors from zinc oxide nanowires, says that the new memory could be integrated with those devices, paving the way for a completely new kind of electronics technology based on zinc oxide. "This paper demonstrates an exciting application of zinc oxide nanowires as nonvolatile memory, which is a key component for future flexible electronics," he says. 

By Prachi Patel
From Technology Review

I Win, You Lose: Brain Imaging Reveals How We Learn from Our Competitors

The team from Bristol University led by Dr Paul Howard-Jones, Senior Lecturer in Education in the Graduate School of Education and Dr Rafal Bogacz, Senior Lecturer in the Department of Computer Science, scanned the brains of players as they battled against an artificial opponent in a computer game.

Top: Our neural activity tends to be stimulated by our competitor's errors (as in the example shown here) rather than their successes. Bottom: This region of the mirror neuron system in the player's motor cortex increased its activity when the player made moves and also when they observed their computer opponent making the same "virtual" moves -- even though they knew it was a computer. 

In the game, each player took turns with the computer to select one of four boxes whose payouts simulated the ebb and flow of natural food sources.

Players were able to learn from their own successful selections, but their competitor's successes completely failed to increase the players' neural activity. Instead, it was the competitor's unexpected failures that generated this additional brain activity. Such failures generated both reward signals in the brains of the players, and learning signals in regions involved in inhibiting responses. This suggests that we benefit from our competitors' failures by learning to inhibit the actions that lead to them.

Surprisingly, when players were observing their competitor make selections, the players' brains were activated as if they were performing these actions themselves. Such 'mirror neuron' activities occur when we observe the actions of other humans but here the players knew their opponent was just a computer and no animated graphics were used. Previously, it has been suggested that the mirror neuron system supports a type of unconscious mind-reading that helps us, for example, judge others' intentions.

Dr Howard-Jones added: "We were surprised to see the mirror neuron system activating in response to a computer. If the human brain can respond as though a computer has a mind, that's probably good news for those wishing to use the computer as a teacher."


Experimental Drug Preserves Memory in Rodents

An experimental drug developed by researchers at the University of Edinburgh reverses age-related memory decline in mice, returning their brains to a more youthful state of cognitive function. The compound is designed to dampen the production of glucocorticoids, stress hormones that are thought to damage the brain's learning and memory centers over time. 

"What's most surprising is that even short-term inhibition was able to reverse memory loss in old mice," says Jonathan Seckl, a professor of molecular medicine who was involved in the research. "I don't think people had realized this was so reversible. It takes [the animals] back to being relatively young." 

The researchers hope to develop equivalent human therapies and are now more extensively studying the safety of a closely related compound in animals. They aim to begin human testing within a year. 

Scientists have long known that glucocorticoids--a class of steroid hormones that mediate our response to stressful situations--play a role in age-related memory decline. Although short-term exposure to glucocorticoids enhances the formation of memories during stressful situations, chronically high levels of the hormones are linked to greater memory loss with age, both in humans and animals. The exact mechanism underlying this link is unclear, but researchers theorize that excess exposure to the hormones makes parts of the brain more vulnerable to damage.

Seckl and his collaborators focused on an enzyme called 11β-hydroxysteroid dehydrogenase type 1 (11 β-HSD1). This enzyme generates an active version of the key glucocorticoid hormone within brain cells and some other tissues, providing a target for fine-tuning the system without blocking the overall stress response. Tinkering with the enzyme seems to have little effect on blood levels of glucocorticoids, which are produced in the adrenal glands. Instead, "this enzyme acts as an intracellular amplifier of glucocorticoids," says Seckl. "If you take out the amplifier, you still have stress hormones, but they shout less loudly and cause less wear and tear." 

The Edinburgh team showed that knocking out either one or both copies of the gene for this enzyme in mice preserved the animals' memory into old age. To determine whether blocking the enzyme could improve memory in already aged animals, researchers then developed a compound designed to cross into the brain and inhibit the enzyme. Just 10 days of treatment in two-year-old mice--the maximum lifespan for a typical lab mouse--was enough to improve the animals' performance on a test of spatial memory. The treatment "returned mice to the equivalent of when they were young and fully functioning," says Brian Walker, another researcher involved in the study. "It's important to emphasize that we are trying to target the pathology--the role that glucocorticoids play in age-related memory decline--not just globally improving memory." The research was published last week in the Journal of Neuroscience.

"As a proof-of-concept study in animals, it's fascinating," says Bruce McEwen, a neuroscientist at Rockefeller University in New York, who was not involved in the research. "It may bring back cortisol from a level that inhibits function to a level that facilitates it."

While it's not yet clear how the experimental compound will affect memory in humans, some evidence suggests that drugs with a similar mechanism may be effective in older people. A drug called carbenoxolone, which inhibits the 11β-HSD1 enzyme (among other enzymes) and has been prescribed for stomach ulcers, improves some types of memory in both healthy older men and those with type 2 diabetes. (Type 2 diabetes, which is thought to be a risk factor for Alzheimer's, is linked to increased glucocorticoid levels in humans.) Carbenoxolone isn't appropriate for treating memory loss, however, Seckl says, because it has serious side effects, such as increasing blood pressure.

Researchers estimate that roughly 20 to 30 percent of people age 75 and older have elevated glucocorticoid levels (more precise figures aren't available). Most affected is so-called declarative memory, the ability to learn new facts and remember, for example, lists. People who suffer age-related memory loss are at higher risk for developing Alzheimer's and other forms of dementia, though the condition is not itself considered a form of dementia.

The Edinburgh compound has so far been tested only in animal models of normal aging, so scientists don't know whether the same approach would help more severe memory impairments. "Whether it would work in Alzheimer's disease or something more aggressive, we don't yet know," says Seckl. His team is now studying the compound in mice that have been genetically engineered to suffer some of the symptoms of Alzheimer's.

Previous attempts to dampen glucocorticoids' effect on memory have had only limited success. Blocking the hormone in the blood interferes with the body's normal stress response, and the benefits of blocking the hormone receptor appear to wear off over time, even with continued use of the drug. Most existing drugs designed to improve cognitive function, such as the Alzheimer's drugs memantine and donepezil, act directly on neurotransmitters, chemical messengers in the brain. 

The new drug may also have some beneficial side effects. Earlier animal experiments showed that blocking the enzyme in mice improves insulin sensitivity and blood sugar levels. A number of pharmaceutical companies are developing similar compounds for the treatment of type 2 diabetes.

By Emily Singer 
From Technology Review

Gene's Location on Chromosome Plays Big Role in Shaping How an Organism's Traits Evolve

Physical traits found in nature, such as height or eye color, vary genetically among individuals. While these traits may differ significantly across a population, only a few processes can explain what causes this variation -- namely, mutation, natural selection, and chance.

In the Science study, the NYU and Princeton researchers sought to understand, in greater detail, why traits differ in their amount of variation. But they also wanted to determine which parts of the genome vary and how this affects the expression of physical traits. To do this, they analyzed the genome of the worm Caenorhabditis elegans (C. elegans), the first animal species to have its genome completely sequenced and therefore a model organism for studying genetics. In their analysis, the researchers measured approximately 16,000 traits in C. elegans. The traits were measures of how actively each gene was being expressed in the worms' cells.

The researchers began by asking if some traits were more likely than others to be susceptible to mutation, with some physical features thus more likely than others to vary. Different levels of mutation indeed explained some of their results. Their findings also revealed significant differences in the range of variation due to natural selection -- those traits that are vital to the health of the organism, such as the activity of genes required for the embryo to develop, were much less likely to vary than were those of less significance to its survival, such as the activity of genes required to smell specific odors.

However, these results left most of the pattern of variation in physical traits unexplained -- some important factor was missing.

To search for the missing explanation, the researchers considered the make-up of C. elegans' chromosomes -- specifically, where along its chromosomes its various genes resided.

Chromosomes hold thousands of genes, with some situated in the middle of their linear structure and others at either end. In their analysis, the NYU and Princeton researchers found that genes located in the middle of a chromosome were less likely to contribute to genetic variation of traits than were genes found at the ends. In other words, a gene's location on a chromosome influenced the range of physical differences among different traits.

The biologists also considered why location was a factor in the variation of physical traits. Using a mathematical model, they were able to show that genes located near lots of other genes are evolutionarily tied to their genomic neighbors. Specifically, natural selection, in which variation among vital genes is eliminated, also removes the differences in neighboring genes, regardless of their significance. In C. elegans, genes in the centers of chromosomes are tied to more neighbors than are genes near the ends of the chromosomes. As a result, the genes in the center are less able to harbor genetic variation.
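The linked-selection argument can be made concrete with a toy simulation. This is a hypothetical sketch under simplified assumptions (a haploid Wright-Fisher model with just two loci), not the authors' actual model: a neutral gene tightly linked to a neighbor under purifying selection retains less variation than one that recombines freely away from its neighbor.

```python
import random

def neutral_heterozygosity(rec_rate, n_pop=100, n_gen=150,
                           mut_rate=0.3, sel_coeff=0.5, seed=0):
    """Toy haploid Wright-Fisher model of background selection.

    Each individual carries two loci: a "selected" locus hit by
    recurrent deleterious mutations (fitness cost sel_coeff) and a
    neutral locus starting at 50/50 allele frequency.  rec_rate is
    the chance per offspring that the neutral allele recombines in
    from a second parent.  Returns the neutral locus's final
    heterozygosity 2p(1-p), a measure of remaining variation.
    """
    rng = random.Random(seed)
    pop = [(False, i % 2 == 0) for i in range(n_pop)]  # (deleterious?, allele)
    for _ in range(n_gen):
        weights = [1.0 - sel_coeff if d else 1.0 for d, _ in pop]
        parents = rng.choices(pop, weights=weights, k=2 * n_pop)
        nxt = []
        for i in range(n_pop):
            p1, p2 = parents[2 * i], parents[2 * i + 1]
            allele = p2[1] if rng.random() < rec_rate else p1[1]
            deleterious = p1[0] or (rng.random() < mut_rate)
            nxt.append((deleterious, allele))
        pop = nxt
    p = sum(1 for _, a in pop if a) / n_pop
    return 2.0 * p * (1.0 - p)

# "Center of chromosome" = tightly linked to a selected neighbor (no
# recombination); "end of chromosome" = freely recombining (rate 0.5).
reps = 40
tight = sum(neutral_heterozygosity(0.0, seed=s) for s in range(reps)) / reps
loose = sum(neutral_heterozygosity(0.5, seed=s) for s in range(reps)) / reps
print(f"mean heterozygosity, tight linkage:      {tight:.3f}")
print(f"mean heterozygosity, free recombination: {loose:.3f}")
```

Averaged over replicates, the tightly linked neutral locus ends up with markedly less heterozygosity: purging deleterious neighbors drags linked neutral variants out of the population along with them, exactly the effect the researchers invoke for genes in chromosome centers.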

The research was conducted by Matthew V. Rockman, an assistant professor at New York University's Department of Biology and Center for Genomics and Systems Biology as well as Sonja S. Skrovanek and Leonid Kruglyak, researchers at Princeton University's Lewis-Sigler Institute for Integrative Genomics, Department of Ecology and Evolutionary Biology, and Howard Hughes Medical Institute.
The study was supported by grants from the National Institutes of Health.


Changing the Color of Single Photons Emitted by Quantum Dots

Two important resources for quantum information processing are the transmission of data encoded in the quantum state of a photon and its storage in long-lived internal states of systems like trapped atoms, ions or solid-state ensembles. Ideally, one envisions devices that are good at both generating and storing photons. However, this is challenging in practice because while typical quantum memories are suited to absorbing and storing near-visible photons, transmission is best accomplished at near-infrared wavelengths where information loss in telecommunications optical fibers is low.

In new NIST experiments, the "color" (wavelength) of photons from a quantum dot single photon source (QD SPS) is changed to a more convenient shade using an up-conversion crystal and pump laser. To test that the process truly acts on single photons without degrading the signal (by creating additional photons), the output is split in two and sent to parallel detectors -- true single photons should not be detected simultaneously in both paths.

To satisfy these two conflicting requirements, the NIST team combined a fiber-coupled single photon source with a frequency up-conversion single photon detector, both developed at NIST. The frequency up-conversion detector uses a strong pump laser and a special nonlinear crystal to convert long-wavelength (low-frequency) photons into short-wavelength (high-frequency) photons with high efficiency and sensitivity.
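The conversion itself follows from energy conservation: the signal and pump photon energies add, so their inverse wavelengths add. A minimal sketch, using illustrative telecom-band numbers (the 1300 nm signal and 1550 nm pump below are assumed values chosen only to show a near-infrared to near-visible shift; the article does not state the actual wavelengths):

```python
def sum_frequency_wavelength_nm(signal_nm, pump_nm):
    """Energy conservation for sum-frequency up-conversion:
    photon energy E = hc / wavelength, and the output photon carries
    the combined energy, so 1/out = 1/signal + 1/pump."""
    return 1.0 / (1.0 / signal_nm + 1.0 / pump_nm)

# Assumed example: a 1300 nm signal photon combined with a 1550 nm pump
out = sum_frequency_wavelength_nm(1300.0, 1550.0)
print(f"up-converted photon: {out:.0f} nm")  # ~707 nm, in the near-visible
```

Under these assumed numbers, a photon in the low-loss fiber band comes out near 700 nm, where detectors are far more mature, which is the point of the scheme.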

According to Matthew Rakher and Kartik Srinivasan, two authors of the paper, previous up-conversion experiments looked at the color conversion of highly attenuated laser beams that contained less than one photon on average. However, these light sources still exhibited "classical" photon statistics exactly like those of an unattenuated laser, meaning that the photons are organized in such a way that most of the time there are no photons, while at other times there is more than one. Secure quantum communications relies upon the use of single photons.
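The distinction can be made concrete with the Poisson statistics that describe laser light. Even attenuated to a mean of 0.1 photons per pulse (an illustrative figure), a laser-like source still emits multi-photon pulses at a small but unavoidable rate:

```python
import math

def poisson_pmf(lam, n):
    """Probability of exactly n photons in a pulse from a laser-like
    (Poissonian) source with mean photon number lam."""
    return math.exp(-lam) * lam ** n / math.factorial(n)

lam = 0.1                      # heavily attenuated: 0.1 photons per pulse on average
p0 = poisson_pmf(lam, 0)       # empty pulse
p1 = poisson_pmf(lam, 1)       # exactly one photon
p_multi = 1.0 - p0 - p1        # two or more photons

print(f"P(0) = {p0:.4f}, P(1) = {p1:.4f}, P(>=2) = {p_multi:.5f}")
# Of the pulses that contain any light at all, the fraction carrying
# more than one photon -- the vulnerability for quantum cryptography:
print(f"multi-photon fraction of non-empty pulses: {p_multi / (1 - p0):.3f}")
```

About one in twenty non-empty pulses carries an extra photon under these assumptions; attenuating further reduces the fraction but also makes most pulses empty, which is why a source that emits exactly one photon on demand is preferable.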

"The quantum dot can act as a true single photon source," says Srinivasan. "Each time we excite the dot, it subsequently releases that energy as a single photon. In the past, we had little control over the wavelength of that photon, but now we can generate a single photon of one color on demand, transmit it over long distances with fiber optics, and convert it to another color."

Converting the photon's wavelength also makes it easier to detect, say co-authors Lijun Ma and Xiao Tang. While commercially available single photon detectors in the near-infrared suffer noise problems, detectors in the near-visible are a comparatively mature and high-performance technology. The paper describes how the wavelength conversion of the photons improved their detection sensitivity by a factor of 25 with respect to what was achieved prior to conversion.