Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's ...

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ...

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ...

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ...

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed ...

Nerve-Electronic Hybrid Could Meld Mind and Machine

Nerve-cell tendrils readily thread their way through tiny semiconductor tubes, researchers find, forming a crisscrossed network like vines twining toward the sun. The discovery that offshoots from nascent mouse nerve cells explore the specially designed tubes could lead to tricks for studying nervous system diseases or testing the effects of potential drugs. Such a system may even bring researchers closer to brain-computer interfaces that seamlessly integrate artificial limbs or other prosthetic devices. “This is quite innovative and interesting,” says nanomaterials expert Nicholas Kotov of the University of Michigan in Ann Arbor. “There is a great need for interfaces between electronic and neuronal tissues.”

To lay the groundwork for a nerve-electronic hybrid, graduate student Minrui Yu of the University of Wisconsin–Madison and his colleagues created tubes of layered silicon and germanium, materials that could insulate electric signals sent by a nerve cell. The tubes were various sizes and shapes and big enough for a nerve cell’s extensions to crawl through but too small for the cell’s main body to get inside.

When the team seeded areas outside the tubes with mouse nerve cells, the cells went exploring, sending their threadlike projections into the tubes and even following the curves of helical tunnels, the researchers report in an upcoming ACS Nano. “They seem to like the tubes,” says biomedical engineer Justin Williams, who led the research. The approach offers a way to create elaborate networks with precise geometries, says Williams. “Neurons left to their own devices will kind of glom on to one another or connect randomly to other cells, neither of which is a good model for how neurons work.”

At this stage, the researchers have established that nerve cells are game for exploring the tiny tubes, which seem to be biologically friendly, and that the cell extensions will follow the network to link up physically. But it isn’t clear if the nerves are talking to each other, sending signals the way they do in the body. Future work aims to get voltage sensors and other devices into the tubes so researchers can eavesdrop on the cells. The confining space of the little tunnels should be a good environment for listening in, perhaps allowing researchers to study how nerve cells respond to potential drugs or to compare the behavior of healthy neurons with malfunctioning ones such as those found in people with multiple sclerosis or Parkinson’s.

Eventually, the arrangement may make it easier to couple living cells with technology on a larger scale, but getting there is no small task, says neuroengineer Ravi Bellamkonda of the Georgia Institute of Technology in Atlanta.

“There’s a lot of nontrivial engineering that has to happen, that’s the real challenge,” says Bellamkonda. “It’s really cool engineering, but what it means for neuroscience remains to be seen.”

By Rachel Ehrenberg, Science News 
From wired

Scientists learn about the brain’s ability to reorganize itself

A new study from the University of Michigan has found that after being disrupted, mouse brains are able to shift important functions tied to learning and memory.
So when Geoffrey Murphy, Ph.D., begins to talk about plastic structures, it’s important to know that he’s not talking about the actual plastics we use in our everyday lives. To Murphy, an associate professor of molecular and integrative physiology at the University of Michigan Medical School, plasticity is a reference to the brain’s ability to change as we learn.

According to a University of Michigan Health System press release, Murphy’s lab, in cooperation with U-M’s Neurodevelopment and Regeneration Laboratory run by Jack Parent, M.D., recently displayed how the plasticity of the brain permitted mice to reestablish important functions related to learning and memory after the scientists inhibited the animals’ ability to make certain new brain cells.
The research results, published online in the Proceedings of the National Academy of Sciences, bring scientists a bit closer to understanding the ways in which the brain deals with interference and redirects neural functioning.
This could eventually lead to treatments for cognitive impairments in humans caused by disease and aging.
“It’s amazing how the brain is capable of reorganizing itself in this manner,” says Murphy, co-senior author of the study and researcher at U-M’s Molecular and Behavioral Neuroscience Institute. “Right now, we’re still figuring out exactly how the brain accomplishes all this at the molecular level, but it’s sort of comforting to know that our brains are keeping track of all of this for us.”
In previously conducted research, the scientists had discovered that restricting cell division in the hippocampi of mice, using radiation or genetic manipulation, reduced the functionality of a cell mechanism important to memory formation known as long-term potentiation.
In this study, though, the researchers showed that the interruption is temporary: within six weeks, the mouse brains were able to compensate for the disruption and reestablish plasticity, says Parent, the study's other senior author, a researcher with the VA Ann Arbor Healthcare System and associate professor of neurology at the U-M Medical School.
When the ongoing growth of key brain cells in adult mice was stopped, the researchers found the brain circuitry compensated for the disruption by enabling existing neurons to be more active. The remaining neurons also had longer life spans than when new cells were constantly being made.
“The results suggest that the birth of brain cells in the adult, which was experimentally disrupted, must be really important – important enough for the whole system to reorganize in response to its loss,” Parent says.

By David Gomez 
From tgdaily

Mini lasers could bring new age to the Internet

Do you like lasers and the Internet? Then you’ll be excited about a new laser device created at the University of Central Florida that could improve high-speed computing and the Internet. 

According to the UCF Today press release, the laser will make computing faster and more reliable. It could potentially lead to a new age of the Internet. 

Professor Dennis Deppe’s miniature laser diode releases more intense light than the ones that are currently used. The light emits at a single wavelength, making it a useful component in compact disc players, laser pointers and optical mice for computers, in addition to high-speed data transmission.

The biggest challenge associated with these devices has been their failure rate. They usually don’t work very well when they face enormous workloads; the stress makes them snap. 

The smaller size and the removal of non-semiconductor materials mean that the new devices could be used in heavy data transmission, which is important to the development of the new Internet age. By integrating laser diodes into the cabling of the future, gigantic amounts of data could be moved across vast distances almost instantaneously. Using the small lasers in optical clocks would also increase the accuracy of GPS and high-speed wireless data communications.

“The new laser diodes represent a sharp departure from past commercial devices in how they are made,” Deppe said from his lab inside the College of Optics and Photonics. “The new devices show almost no change in operation under stress conditions that cause commercial devices to rapidly fail.”

“At the speed at which the industry is moving, I wouldn’t be surprised if in four to five years, when you go to Best Buy to buy cables for all your electronics, you’ll be selecting cables with laser diodes embedded in them,” he added.

Deppe and Sabine Freisem, a senior research scientist who has been collaborating with Deppe for eight years, presented their findings in January at the SPIE (formerly The International Society for Optical Engineering) Photonics West conference in San Francisco. 

“This is definitely a milestone,” Freisem said. “The implications for the future are huge.”
Unfortunately, there is still one issue the team is trying to fix: the laser diodes need to operate more efficiently at the voltage required to drive them.

Deppe said that once that problem is fixed, the uses for the laser diodes will increase rapidly. They could even be used in lasers in space to remove unwanted hair. Why wax or shave when you can use a freaking space laser, right?
“We usually have no idea how often we use this technology in our everyday life already,” Deppe said. “Most of us just don’t think about it. With further development, it will only become more commonplace.”

The thought of upgraded cables with laser diodes in them is quite interesting. They say it will allow for faster data transfer, so it makes you wonder whether connecting the high-powered cables will amount to an automatic upgrade of your computer. It also makes you wonder whether computers and devices will have to be upgraded before they can handle the improved cables the UCF researchers are talking about.

It is an interesting development and it sounds like the nature of computing will take another huge step forward in the near future.

By David Gomez 
From tgdaily

Blood analysis chip detects diseases in minutes

A big step forward in microfluidics has helped researchers develop stand-alone, self-powered chips that can diagnose diseases within minutes. The Self-powered Integrated Microfluidic Blood Analysis System (SIMBAS) can process whole blood samples without the use of external tubing and extra components.

"This is a very important development for global healthcare diagnostics," says UC Berkeley professor of bioengineering Luke Lee. “Field workers would be able to use this device to detect diseases such as HIV or tuberculosis in a matter of minutes." 

The SIMBAS biochip uses trenches patterned underneath nanoscale microfluidic channels. When whole blood is dropped onto the chip’s inlets, the relatively heavy red and white blood cells settle down into the trenches, separating from the clear blood plasma. The blood moves through the chip in a process called degas-driven flow.

In experiments, the researchers were able to capture more than 99 percent of the blood cells in the trenches and selectively separate plasma using this method.

The team demonstrated the proof-of-concept of SIMBAS by testing a five-microliter sample of whole blood that contained biotin (vitamin B7) at a concentration of about 1 part per 40 billion.
The chip provided a readout of the biotin levels in 10 minutes.
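
For a sense of scale, here is a rough back-of-the-envelope calculation of how little biotin that concentration puts in the sample. Treating "1 part per 40 billion" as a mass ratio and assuming a whole-blood density of about 1.06 g/mL are interpretive assumptions, not details from the article.

```python
# Rough scale of the SIMBAS demonstration: mass of biotin in a
# 5-microliter whole-blood sample at ~1 part per 40 billion.
# Assumptions (not from the article): the ratio is by mass, and
# whole blood has a density of roughly 1.06 g/mL.

SAMPLE_VOLUME_UL = 5.0           # microliters of whole blood
BLOOD_DENSITY_G_PER_ML = 1.06    # assumed density of whole blood
CONCENTRATION = 1 / 40e9         # ~1 part per 40 billion

sample_mass_g = (SAMPLE_VOLUME_UL * 1e-3) * BLOOD_DENSITY_G_PER_ML  # uL -> mL -> g
biotin_mass_g = sample_mass_g * CONCENTRATION

# Roughly a tenth of a picogram of analyte, detected in 10 minutes.
print(f"biotin in sample: {biotin_mass_g * 1e12:.3f} picograms")
```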

"This is a very important development for global healthcare diagnostics. Field workers would be able to use this device to detect diseases such as HIV or tuberculosis in a matter of minutes. The fact that we reduced the complexity of the biochip and used plastic components makes it much easier to manufacture in high volume at low cost," says Lee. 

"Our goal is to address global health care needs with diagnostic devices that are functional, cheap and truly portable."

By Kate Taylor 
From tgdaily

Engineers Make Breakthrough in Ultra-Sensitive Sensor Technology

The sensor, which is the most sensitive of its kind to date, relies on a completely new architecture and fabrication technique developed by the Princeton researchers. The device boosts faint signals generated by the scattering of laser light from a material placed on it, allowing the identification of various substances based on the color of light they reflect. The sample could be as small as a single molecule.

Micrograph of a sensor developed at Princeton for sensing Raman scattering. Pillars support metal components that gather light and amplify Raman signals, wavelengths of light that can be used to identify a substance.

The technology is a major advance in a decades-long search to identify materials using Raman scattering, a phenomenon discovered in the 1920s by the Indian physicist Chandrasekhara Raman, in which light reflecting off an object carries a signature of its molecular composition and structure.

"Raman scattering has enormous potential in biological and chemical sensing, and could have many applications in industry, medicine, the military and other fields," said Stephen Y. Chou, the professor of electrical engineering who led the research team. "But current Raman sensors are so weak that their use has been very limited outside of research. We've developed a way to significantly enhance the signal over the entire sensor and that could change the landscape of how Raman scattering can be used."

Chou and his collaborators, electrical engineering graduate students Wen-Di Li and Fei Ding and postdoctoral fellow Jonathan Hu, published a paper on their innovation in February in the journal Optics Express. The research was funded by the Defense Advanced Research Projects Agency.

In Raman scattering, a beam of pure one-color light is focused on a target, but the light reflected from the object contains two extra colors. The frequencies of these extra colors are unique to the molecular makeup of the substance, providing a potentially powerful method for determining its identity, analogous to the way a fingerprint or DNA signature helps identify a person.
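
The relationship between the laser line and the two extra colors can be sketched numerically. Raman shifts are conventionally quoted as wavenumber offsets (in inverse centimeters) from the laser line; the 532 nm laser and 1,000 cm^-1 shift below are illustrative values, not figures from the article.

```python
# Sketch of the Raman "extra colors": given a laser wavelength and a
# molecular Raman shift in cm^-1, compute the Stokes (lower-energy) and
# anti-Stokes (higher-energy) scattered wavelengths.

def raman_wavelengths(laser_nm: float, shift_cm1: float) -> tuple[float, float]:
    """Return (stokes_nm, anti_stokes_nm) for a laser line and Raman shift."""
    laser_cm1 = 1e7 / laser_nm                    # nm -> wavenumber in cm^-1
    stokes = 1e7 / (laser_cm1 - shift_cm1)        # longer wavelength
    anti_stokes = 1e7 / (laser_cm1 + shift_cm1)   # shorter wavelength
    return stokes, anti_stokes

# Illustrative values: green 532 nm laser, 1000 cm^-1 molecular shift.
s, a = raman_wavelengths(532.0, 1000.0)
print(f"Stokes: {s:.1f} nm, anti-Stokes: {a:.1f} nm")
```

The two shifted wavelengths straddle the laser line, which is why the pair of "extra colors" acts as a fingerprint of the molecule rather than of the light source.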

Since Raman first discovered the phenomenon -- a breakthrough that earned him the Nobel Prize -- engineers have dreamed of using it in everyday devices to identify the molecular composition and structure of substances, but for many materials the extra colors of reflected light were too weak to be seen even with the most sophisticated laboratory equipment.

Researchers discovered in the 1970s that the Raman signals were much stronger if the substance to be identified was placed on a rough metal surface or on tiny particles of gold or silver. The technique, known as surface-enhanced Raman scattering (SERS), showed great promise, but even after four decades of research it has proven difficult to put to practical use. The strong signals appeared only at a few random points on the sensor surface, making it difficult to predict where to measure the signal and resulting in a weak overall signal for the sensor as a whole.

Abandoning the previous methods for designing and manufacturing the sensors, Chou and his colleagues developed a completely new SERS architecture: a chip studded with uniform rows of tiny pillars made of metals and insulators.

One secret of the Chou team's design is that their pillar arrays are fundamentally different from those explored by other researchers. Their structure has two key components: a cavity formed by metal on the top and at the base of each pillar; and metal particles of about 20 nanometers in diameter, known as plasmonic nanodots, on the pillar wall, with small gaps of about 2 nanometers between the metal components.

The small particles and gaps significantly boost the Raman signal. The cavities serve as antennae, trapping light from the laser so it passes the plasmonic nanodots multiple times to generate the Raman signal rather than only once. The cavities also enhance the outgoing Raman signal.

Chou's team named their new sensor "disk-coupled dots-on-pillar antenna-array," or D2PA for short.
So far, the chip is a billion times (10^9) more sensitive than was possible without SERS boosting of Raman signals, and it is uniformly sensitive, making it more reliable for use in sensing devices. Such sensitivity is several orders of magnitude higher than previously reported.

Already, researchers at the U.S. Naval Research Laboratory are experimenting with a less sensitive chip to explore whether the military could use the technology pioneered at Princeton for detecting chemicals, biological agents and explosives.

In addition to being far more sensitive than its predecessors, the Princeton chip can be manufactured inexpensively at large sizes and in large quantities. This is due to the easy-to-build nature of the sensor and a new combination of two powerful nanofabrication technologies: nanoimprint, a method that allows tiny structures to be produced in cookie-cutter fashion; and self-assembly, a technique where tiny particles form on their own. Chou's team has produced these sensors on 4-inch wafers (the basis of electronic chips) and can scale the fabrication to much larger wafer size.

"This is a very powerful method to identify molecules," Chou said. "The combination of a sensor that enhances signals far beyond what was previously possible, that's uniform in its sensitivity and that's easy to mass produce could change the landscape of sensor technology and what's possible with sensing."


Engineers create 3D microscope lens, see the tiny elephants in your ear

The ability to view tiny images in the third D has been made possible by Lei Li and Allen Yi of Ohio State University. The two have crafted a one-of-a-kind 3D lens that, unlike other three-dimensional microscopes that capture images by circling around the subject, sees teeny objects while stationary. Although the engineers crafted the lens themselves on a precision cutting machine using a diamond blade, they say it can be produced using traditional molding methods. The fingernail-sized lens, made of a thermoplastic material (aka acrylic glass), was cut with 10-nanometer spacing (that's tiny) to ensure a flat plane. The top is surrounded by eight facets -- sort of like a gemstone, but not symmetric -- allowing the viewer to see nine different angles at once. This should pave the way for scientists to get better angles of microscopic objects, but they can always try using the 3DS and some DIY lens attachments, right?

By Sam Sheffer
From engadget 

DARPA M3 program to make cheaper, more mobile robots for the US war machine

DARPA, that governmental black magic factory that gave us the flying Humvee and Hummingbird spybot, has unveiled its new Maximum Mobility and Manipulation (M3) program, which plans to put us on the fast track to our robotic future. M3 aims to improve robotic research through four specialized development tracks -- design tools, fabrication, control, and prototype demonstration -- that divvy up the work between commercial labs and universities. The program will not replace existing bionic projects, but some, like the Autonomous Robotic Manipulation (ARM) program, will be folded into the new scheme. DARPA anticipates that the plan will result in cheaper bots superior to those we have today, but not superior to man... we hope. 

By Michael Gorman
From engadget 

Bug Creates Butanol Directly from Cellulose

Butanol—a promising next-generation biofuel—packs more energy than ethanol and can be shipped via oil pipelines. But, like ethanol production, biobutanol production is focused on edible feedstocks such as beets, corn starch, and sugarcane.

 Super bug: Genetic engineers reprogrammed this microbe, Clostridium cellulolyticum, to turn cellulose into butanol.

Now James Liao, a biomolecular engineer at the University of California, Los Angeles, has developed two routes to liberate butanol from its dependence on food crops. Liao, who has a track record for commercializing innovative biofuels processes, has proven that microbes can produce the advanced biofuel directly from agricultural wastes, as well as from protein feedstocks such as algae. 

Liao's demonstration of direct cellulose-to-butanol conversion could bring down the cost of cellulosic biofuels, which is currently prohibitively high. His protein-based process provides the biofuels field with entirely novel feedstock options. 

While they're renewable, biofuels face attacks from environmental and food activists, and biobutanol is no exception: the first generation of biobutanol plants under development will run on corn-based sugar and starch. "Butanol has some technical benefits, but the real problem is the amount of food that goes into making a gallon of fuel," says Jeremy Martin, a senior scientist at the Union of Concerned Scientists, a Cambridge, Massachusetts-based advocacy group that is part of a broad coalition pushing Congress to end lucrative tax credits for corn ethanol. 

Liao's innovations could end biobutanol's association with corn—an association that, ironically, is partly of his making. In 2008, Liao developed a microbial pathway for converting sugar into isobutanol, a high-octane isomer of butanol. That innovation is now being commercialized by Gevo, an Englewood, Colorado-based startup that Liao cofounded. Gevo raised $107 million in an IPO last month to support its plans to retrofit corn ethanol plants to produce isobutanol instead.

Plans for a shift to biofuels production from biomass feedstocks such as switchgrass, corn stalks, and sugarcane bagasse (or plant residue) are, meanwhile, moving slowly because of higher costs. The U.S. Environmental Protection Agency mandated use of just 6.6 million gallons of cellulosic ethanol this year—less than 3 percent of the 250-million-gallon goal set by Congress four years ago. The holdup is from added processing steps required to break down these cellulosic feedstocks and thus generate sugars for fermentation; the processing boosts costs considerably, making production facilities difficult to finance.

Liao's direct cellulose-to-butanol process, developed in collaboration with researchers at Oak Ridge National Laboratory, promises to simplify things by expanding the capabilities of fermentation microbes. The key was adding Liao's sugar-to-isobutanol pathway to a microbe, Clostridium cellulolyticum, that likes chewing on biomass but does not normally make butanol. The microbe was originally isolated from composted grass, and two years ago, the U.S. Department of Energy's Joint Genome Institute completed a sequence of its genome.
The result of the genetic engineering, published this month in the journal Applied and Environmental Microbiology, is a single organism that takes in cellulose and cranks out isobutanol. Liao says the output and conversion rate are low, but this "proof of principle" is likely the trickiest part of the development process. "The rest is relatively straightforward. Not trivial, but straightforward. It becomes a matter of funding and resources," says Liao. 

The next step is to move the genetic modifications to a faster-growing variant of Clostridium or some other microbe. Liao bets the technology could be production-ready in as little as two years.

One speed bump that could slow things down is litigation over rights to use Liao's technology. Gevo is being sued for patent infringement by competitor Butamax Advanced Biofuels, a joint venture between BP and DuPont that, like Gevo, plans to convert corn-based ethanol plants to isobutanol. Butamax alleges that Gevo's use of genetic engineering to make butanol violates a broad U.S. patent issued to Butamax in December 2010. 

Another obstacle is concern about the environmental impact of heavy biomass use. In January, the EPA issued a draft report to Congress on the environmental impacts from biofuels production. The report outlined several concerns with production of biomass-based fuels. It noted that using corn stover (the leaves and stalks left after harvest) to produce fuels, instead of plowing the stover back into farmlands, could result in soil degradation and choke streams and rivers with increased runoff. Environmental activists have raised concerns about the cultivation of marginal lands that have been set aside to boost biodiversity and provide protective barriers around water bodies.

Liao's demonstration of genetically engineered E. coli that can turn protein into isobutanol also provides a potential alternative to biomass feedstocks: fast-growing photosynthetic algae. Current R&D projects developing algae-based biofuels seek to convert algal-produced fats, which make up about a quarter of algal mass. Proteins, in contrast, make up roughly two-thirds. 

It would be possible, says Liao, to create a recycling production system in which isobutanol-producing microbes are sustained by algal protein as well as industrial fermentation residues recovered from prior rounds of butanol production. Like algae, fermentation residues are composed largely of proteins.

"These results show the feasibility of using proteins for biorefineries," Liao and UCLA colleagues wrote this month in the journal Nature Biotechnology.

Liao says protein-fed biorefineries cranking out isobutanol are probably five to 10 years from realization, so cellulosic isobutanol is likely to come first. He acknowledges that algae-based protein feedstocks may, like cellulosic biomass, turn out to have unforeseen costs. But one thing is certain, says Liao: "They're certainly much more sustainable than petroleum or coal or sugar." 

By Peter Fairley
From Technology Review

A New Way to Churn Out Cheap LED Lighting

A startup in California has developed a manufacturing technique that could substantially cut the cost of LED lightbulbs—a more energy-efficient type of lighting.

 Light the way: Growing gallium nitride on silicon wafers could cut the cost of producing white LEDs.

LEDs are conventionally made on a relatively costly substrate of silicon carbide or sapphire. Bridgelux has come up with a new process that takes advantage of existing fabrication machines used to make silicon computer chips, potentially cutting LED production costs by 75 percent, according to the company.

Despite LEDs' higher efficiency and longer life, few homes and businesses use LED lighting—largely because of the initial cost. The LED chip makes up 30 to 60 percent of the cost of a commercial LED lightbulb; electronic control circuits and heat-management components account for the rest. So for a 60-watt-equivalent bulb that costs $40, Bridgelux's technology could bring the cost down by $9 to $18. Integrating the light chip with the electronics might reduce costs further.
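
Those savings figures follow directly from the percentages quoted, assuming the 75 percent cut applies to the chip's share of the $40 retail price:

```python
# Back-of-the-envelope check of the savings range: the LED chip is
# 30-60% of the bulb's cost, and Bridgelux claims a 75% cut in chip
# production cost. For a $40 bulb, that yields the quoted $9-$18.

BULB_PRICE = 40.0           # 60-watt-equivalent LED bulb, dollars
CHIP_SHARE = (0.30, 0.60)   # LED chip's share of the bulb's cost
CHIP_COST_CUT = 0.75        # claimed reduction in chip production cost

low, high = (BULB_PRICE * share * CHIP_COST_CUT for share in CHIP_SHARE)
print(f"potential savings per bulb: ${low:.0f} to ${high:.0f}")  # $9 to $18
```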

LEDs made with the new technique produce 135 lumens for each watt of power. The U.S. Department of Energy's Lighting Technology Roadmap calls for an efficiency of 150 lumens per watt by 2012. Some LED makers, such as Cree, in Durham, North Carolina, already sell LED lamps with efficiencies in that range. In contrast, incandescent bulbs emit around 15 lumens per watt, and fluorescent lightbulbs emit 50 to 100 lumens per watt.
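
To put those efficacy numbers in everyday terms, here is the power each technology would draw to produce roughly 800 lumens, the output of a typical 60-watt incandescent bulb. The 800-lumen benchmark is a common rule of thumb, not a figure from the article.

```python
# Watts drawn to produce ~800 lumens at the efficacies quoted above.
# 800 lm is the assumed output of a typical 60 W incandescent bulb.

TARGET_LUMENS = 800.0

efficacies_lm_per_w = {
    "incandescent": 15.0,
    "fluorescent (midpoint)": 75.0,   # midpoint of the 50-100 range
    "new silicon-based LED": 135.0,
}

for name, lm_per_w in efficacies_lm_per_w.items():
    watts = TARGET_LUMENS / lm_per_w
    print(f"{name:>24}: {watts:5.1f} W")
```

At 135 lm/W the new LEDs would need under 6 W to match a 60 W incandescent, which is where the energy-savings argument for the higher up-front cost comes from.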

Manufacturers typically make white LEDs by coating blue gallium-nitride devices with yellow phosphors. The gallium nitride is grown on two- to four-inch sapphire or silicon carbide wafers. Cree builds its chips on silicon-carbide wafers, "because we believe it produces superior LEDs," says company spokesperson Michelle Murray.

Larger wafers mean more devices fabricated at once, which brings down cost. But large sapphire or silicon carbide wafers are more difficult, and expensive, to make. Companies such as Osram Opto Semiconductors in Germany are now moving to 15-centimeter sapphire wafers, most likely the largest size possible. Making 20-centimeter silicon wafers, on the other hand, is routine in the semiconductor chip-making industry. Bridgelux's new silicon wafers were, in fact, made at an old silicon fabrication plant in Silicon Valley.

It is hard to grow gallium nitride on silicon, mainly because the materials expand and contract at very different rates, explains Colin Humphreys, a materials science researcher at Cambridge University. The process is carried out at temperatures around 1,000 °C, and, upon cooling, the gallium nitride cracks because it is under tension, Humphreys says. One way to solve the problem is to insert additional thin films around the gallium nitride to compress the material and balance out the tension produced during cooling. In fact, Humphreys and his colleagues have used this trick to make gallium-nitride LEDs on silicon; their devices produce 70 lumens per watt. Bridgelux might be using a similar technique. "The result from Bridgelux is impressive," Humphreys says. "It offers the promise of a large cost reduction without any reduction of efficiency."
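
The size of the problem Humphreys describes can be estimated from the mismatch in thermal expansion coefficients. The CTE values below are typical literature figures, not numbers from the article:

```python
# Rough estimate of why GaN-on-silicon cracks: the strain accumulated
# from the thermal-expansion mismatch while cooling from growth
# temperature. CTE values are approximate literature figures.

CTE_GAN = 5.6e-6    # per kelvin, gallium nitride (a-axis), approximate
CTE_SI = 2.6e-6     # per kelvin, silicon, approximate
DELTA_T = 1000 - 25  # kelvin, cooling from ~1,000 C growth to room temp

mismatch_strain = (CTE_GAN - CTE_SI) * DELTA_T
print(f"tensile strain on cooling: ~{mismatch_strain * 100:.2f}%")
```

A few tenths of a percent of tensile strain is enough to crack a brittle film, which is why the compressive buffer layers Humphreys mentions are needed to balance it out.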

Other LED makers, including Osram, are also trying to make gallium-nitride LEDs on silicon. Bridgelux expects to deliver its first commercial silicon-based LEDs in two to three years.

By Prachi Patel
From Technology Review

Silicon Art Hidden Inside Samsung’s Galaxy Tab

 Engineers hid a microscopic warning deep within an Infineon chip.

Silicon chips have billions of transistors in every square inch. But sometimes there’s enough room left over for chip engineers to insert a little joke.

While using a scanning electron microscope to examine the microcircuitry of a chip found in Samsung’s Galaxy Tab and Galaxy S phone, consulting company Chipworks discovered a surprise.

Underneath six layers of aluminum and silicon dioxide circuitry, almost at the level of the polysilicon wafer that underlies the entire chip, engineers concealed a tiny, tiny message.

Below the letters IFX (the stock symbol for Infineon, the company that makes the chip) is a tiny warning, made out of letters just two microns (2 µm) high:

“You would never find this message unless you were seriously looking for it,” says Chipworks marketing manager Rob Williamson.

The chip, the Infineon PMB5703, provides radio-frequency transmission and reception functions relating to the devices’ baseband and 3G features.

The tiny message is hidden in the upper right corner of the Infineon chip

Chipworks has put many chips under the scanning electron microscope and has discovered dozens of hidden images and messages like this one. Constructed of the same materials as the chip’s circuitry — silicon dioxide, aluminum, copper and the like — the artwork can include cartoons, icons, or merely the initials of the chips’ designers.

In many cases, this artwork is not only tiny, it’s completely invisible unless you are disassembling the chip. Before it found this message, for instance, Chipworks had to delaminate the chip, layer by layer, putting each layer under the microscope. The purpose of that project was to understand the chip’s architecture, not to find hidden messages, but sometimes these Easter eggs pop out.

The makers of the Infineon PMB5703 must have had some extra time on their hands, because Chipworks found no fewer than four other images on the chip, including a smiley face, a drummer, a baby duck called Calimero and a smiling dragon named Grisu.

By Dylan Tweney

Potential Health Effects of Radiation Exposure

The crisis at the stricken Fukushima Daiichi nuclear power plant in Japan has raised alarm over possible health effects. So far, however, radiation levels outside the plant remain relatively low and are unlikely to cause health problems.

 Health scare: People being screened for radiation exposure at a testing center in Koriyama City, Japan.

The health effects of radiation depend on the dose a person receives. The acute effects of radiation sickness usually begin when an individual receives a dose of radiation that is one sievert (the standard international measurement of radiation exposure) or above. Most of the workers hospitalized after the nuclear disaster that destroyed a reactor in Chernobyl in 1986 received estimated doses of between one and six sieverts. Because such levels are rarely encountered, radiation levels are most often given in millisieverts (one thousandth of a sievert) or microsieverts (one millionth of a sievert). For comparison, a chest x-ray delivers about 0.2 millisieverts of radiation, and the average person in the U.S. is exposed to about six millisieverts of radiation per year, about half of which comes from natural sources and the other half from medical procedures.
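
Keeping the units straight is most of the work when reading radiation coverage; the reference figures quoted above can be cross-checked in a few lines:

```python
# Radiation dose units and the reference figures quoted in this article.

SV_PER_MSV = 1e-3   # 1 millisievert = one thousandth of a sievert
SV_PER_USV = 1e-6   # 1 microsievert = one millionth of a sievert

CHEST_XRAY_MSV = 0.2          # dose from one chest x-ray
US_ANNUAL_MSV = 6.0           # average yearly exposure in the U.S.
SICKNESS_THRESHOLD_SV = 1.0   # acute radiation sickness typically begins here

# A year of typical U.S. exposure equals about 30 chest x-rays...
xrays_per_year = US_ANNUAL_MSV / CHEST_XRAY_MSV
# ...and the acute-sickness threshold is well over a century of typical exposure.
years_to_threshold = SICKNESS_THRESHOLD_SV / (US_ANNUAL_MSV * SV_PER_MSV)

print(f"{xrays_per_year:.0f} chest x-rays per typical year")
print(f"{years_to_threshold:.0f} years of typical exposure to reach 1 Sv")
```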

Radiation levels at the Fukushima plant have fluctuated widely. The highest emission levels so far are 400 millisieverts per hour—a rate high enough to cause symptoms of radiation sickness within two or three hours. But that level quickly dropped, and other readings have been far lower. On Tuesday, measurements at the gate of the power plant ranged from 0.6 millisieverts per hour to 11.9 millisieverts per hour, according to the International Atomic Energy Agency. The levels at the gate were at 1.9 millisieverts per hour on Wednesday, according to the Japan Atomic Industrial Forum. Radiation levels, however, are changing quickly, and there have been reports of sharp rises due to problems with stored spent fuel rods. [UPDATE, 3/17/2011: There have been reports that rates of 250 millisieverts per hour have been measured above the plant.]

The readings taken outside the plant don't necessarily reflect the exposure to people working inside. Levels may be higher closer to the reactors, but workers are wearing protective clothing and using monitors to estimate their personal exposure, which they can limit by retreating to protected control rooms. Japanese authorities recently raised the maximum dose limit on the workers to 250 millisieverts, or five times the annual dose allowed for workers in the U.S.

People exposed to very high levels of radiation in a short amount of time are at risk for acute radiation syndrome, which can be fatal. William McBride, professor of radiation oncology at University of California, Los Angeles, says that at a radiation exposure of about one sievert, a person begins to experience sickness after an initial delay of a day or more. The most common symptoms are nausea and vomiting, diarrhea, and fever, and the illness often resolves within days. 

At higher doses, symptoms become more severe and can lead to long-term health consequences or death. Radiation first affects cells that divide rapidly, including blood cells and the cells lining the gastrointestinal tract. At four or five sieverts, the effects can be life-threatening, and may include a need for a bone marrow transplant, or the use of bone marrow growth factor stimulants to avoid death within two to eight weeks. At higher doses, around 10 sieverts, McBride says, the intestines stop functioning properly, and this may cause death within a few weeks. At even higher doses, blood vessels become leaky and the brain is affected, likely causing death within 24 hours. 

In terms of potential health dangers from radiation from the Fukushima Daiichi Nuclear Power Station, "the people who are in the most immediate danger from acute and severe radiation doses are those people who are on site at the moment and who are desperately trying to keep the reactors under control," says Jacqueline Williams, a radiation oncologist at the University of Rochester Medical Center. 

Moving away from the immediate vicinity of the plant, radiation levels drop very rapidly. James Thrall, radiologist-in-chief at Massachusetts General Hospital, says that radiation levels are inversely proportional to the square of the distance from the source: the level two miles from the source is one-quarter what it is at one mile, and "at 10 miles away, it's almost an infinitesimal fraction," he says. Individual exposure also varies widely depending on whether a person is outside or indoors, or shielded with protective clothing. Japanese authorities have evacuated the population living within a 20-kilometer radius of the plant, and have warned those living within 30 kilometers to stay indoors. Some experts say that people living beyond this range have no cause for concern at this time. "This has nothing to do with the general population," McBride says.

The trickier question is whether lower doses of radiation—well below the threshold of acute illness—could lead to long-term health consequences for those in that area. Thrall says that epidemiological studies on survivors of nuclear attacks on Japan have found that those receiving 50 millisieverts or more had a slightly elevated cancer risk—about 5 percent higher than expected—and that risk seemed to rise with higher exposures. But scientists still vigorously debate whether that risk can be extrapolated down to even lower exposures.

After the nuclear disaster at Chernobyl, the population experienced a surge in thyroid cancers in children. However, scientists found that the culprit was not radiation in the air but radioactive contamination of the ground, which eventually found its way into cow's milk. Thrall points out that in Japan, this is highly unlikely because the authorities are carefully monitoring the water and food supplies and keeping the public informed, which did not happen at Chernobyl.

Another important factor in determining the potential health consequences is the type of radioactive isotopes released from the plant. Different isotopes have vastly different half-lives; some decay almost instantly, while others persist for weeks or years. Iodine-131, which has been detected at the site, has a half-life of eight days, while caesium-137, also detected, has a half-life of 30 years. Japanese authorities have distributed over 200,000 doses of potassium iodide tablets, which can reduce the risk of thyroid cancer from radioactive iodine. However, Thrall says, the pills can cause unpleasant side effects and, rarely, serious conditions in people with allergies or thyroid problems, so they should not be taken indiscriminately.
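Those half-life figures translate directly into how quickly each isotope fades. A short sketch of the standard decay formula N/N0 = (1/2)^(t/T½), using the half-lives quoted above:

```python
def fraction_remaining(days, half_life_days):
    """Fraction of a radioactive isotope left after `days`:
    N/N0 = (1/2) ** (t / T_half)."""
    return 0.5 ** (days / half_life_days)

# Half-lives quoted in the article:
iodine_131_days = 8
caesium_137_days = 30 * 365  # ~30 years, expressed in days

print(fraction_remaining(30, iodine_131_days))   # ~0.074: mostly gone in a month
print(fraction_remaining(30, caesium_137_days))  # ~0.998: barely decayed at all
```

After a month, more than 90 percent of any iodine-131 has decayed away, while caesium-137 contamination is essentially undiminished, which is why the two isotopes pose such different long-term concerns.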

There are major differences between the type of reactor at Chernobyl and the one at Fukushima, according to Peter Caracappa, a professor in the Radiation Measurement and Dosimetry group at the Rensselaer Polytechnic Institute. During an online Q&A hosted by Reuters, he said "the first difference is that at Chernobyl, there was a large quantity of graphite in the core which caught fire and spread contents of the reactor high into the air." The Japanese plant uses water rather than graphite. Second, the Chernobyl reactor did not have a containment structure like the ones present at these plants. Such structures, he said, "are designed to contain the contents of the reactor even in the case of an accident. A failure of containment, if it should come to pass, may allow materials from the core to 'leak' out, but they would not 'spew' out in the same way as Chernobyl." That could limit how far the radiation spreads.

By Courtney Humphries
From Technology Review

Wearable Scanner Opens New Frontier in Neuroscience

A tiny wearable scanner has been used to track chemical activity in the brains of unrestrained animals for the first time. By revealing neurological circuitry as the subjects perform normal tasks, researchers say, the technology could greatly broaden the understanding of learning, addiction, depression, and other conditions.

The device was designed to be used with rats—the main animal model used by behavioral neuroscientists. But the researchers who developed the device, at Brookhaven National Laboratory, say it would be straightforward to engineer a similar device for people.

 Wearable PET: A rat’s head fits in the circular opening of this device, which is surrounded by miniaturized detectors and electronics.

Positron emission tomography, or PET, is already broadly used in neuroscience research and in clinical treatment. It allows researchers to track the location of radioactively labeled neurotransmitters (the chemicals that carry signals between neurons) or drugs within the brain. Images of the way neurotransmitters and drugs move through the brain can reveal the processes that underpin normal behavior such as learning as well as pathologies including addiction. PET has been used to map drug-binding sites in the brains of addicts and healthy people, and to study how those sites change over time and with therapy.

A conventional PET scanner is so large that these studies have to be performed with the subject lying inside a large tube. Large photomultiplier tubes amplify signals from gamma rays emitted by labeled chemicals in the brain. The signals then pass through a desk-sized rack of electronics that process them and map them to a particular region of the brain. To get good readings during animal studies, the subjects are typically anaesthetized or restrained. What's being measured is not normal waking behavior.

"We have very limited data about what brains do in the real world," says Paul Glimcher, professor of neuroscience, economics, and psychology at New York University. Glimcher was not involved with the work.

The new portable scanner is designed to provide the same information about brain chemistry while an animal behaves naturally. It is small and lightweight enough that a rat can carry it around on its head. "[The rat] can move freely, interact with other animals, and at the same time we can make a 3-D map of, for example, dopamine receptors throughout the brain," says David Schlyer, a senior scientist at Brookhaven who led the work.

Schlyer's group worked for years to engineer a miniature PET scanner that could be worn by a moving subject. The device consists of a metal ring hanging from a structure that bears its weight and allows the rat to move around. The rat's head goes inside the ring, which contains both detectors and electronics.

The key to miniaturizing the device, Schlyer says, was integrating all the electronics for each detector in the ring on a single, specialized chip. Avalanche photodiodes also replace the large photomultiplier tubes of conventional PET, amplifying the signals produced by gamma rays from the labeled chemicals in the brain. "The rats take about an hour to acclimate, then begin behaving normally," says Schlyer. The Brookhaven device is described this week in the journal Nature Methods.

The Brookhaven group used the scanner to map the dopamine receptors throughout the entire brains of moving rats for the first time. Other groups, including Glimcher's, have previously used invasive probes to study dopamine levels in cubic-millimeter-sized portions of the brain in unrestrained animals, but have not been able to look at the entire brain.

Glimcher describes one of several experiments that could be done with the portable device. Researchers know that addicts who have successfully completed rehab are at great risk of relapse if they visit the places they associate with the drug, probably because their brain has been chemically rewired to respond to these associations. Glimcher imagines studies in rats that map brain chemistry when the animals are allowed to decide whether or not to take a drug, and when they wander into a location they have learned to associate with the drug.

"We don't really understand that well how circuits in [different parts of the brain] interact in addiction," says Glimcher. "To even get to a place where I can give you a clinical hypothesis, we have got to get more basic information. This is the breakthrough that could make that possible."

PET is not as broadly used in studies involving people as other neuroimaging methods because of the small but significant exposure to radiation that's necessary. Still, the Brookhaven researchers say it would be possible to make a wearable PET scanner that fits inside something resembling a football helmet. Joseph Huston, chair of the Center for Behavioral Neurosciences at the University of Düsseldorf, says the Brookhaven group has done "an incredible service" to the neuroscience community in developing the device. "The rat is the most important model for the brain—everything basic [we know] about learning, feeding, fear, sex, is based on work in the rat." 

Schlyer says his group has talked with a few companies about licensing a commercial version of the device. But for now, they are mainly planning further behavioral studies in their lab. Mapping dopamine in waking animals could provide insights into a wide range of normal and pathological conditions such as the movement problems associated with Parkinson's disease. But dopamine is just one of the many brain chemicals the group can map. Schlyer says they will also study the sexual behavior of rats. 

The group is also working on another instrument that combines PET with magnetic resonance imaging to provide richer information about tissue structure and function. They will start a clinical trial of this device in breast cancer patients next month. 

By Katherine Bourzac
From  Technology Review

Tiny Endoscopy Camera is as Small as a Grain of Rice

Tiny video cameras mounted on the end of long, thin fiber optic cables, commonly known as endoscopes, have proven invaluable to doctors and researchers wishing to peer inside the human body. Endoscopes can be rather pricey, however, and like anything else that goes inside people's bodies, they need to be sanitized after each use. A newly developed type of endoscope is claimed to address those drawbacks by being so inexpensive to produce that it can be thrown away after each use. Not only that, but it also features what is likely the world's smallest complete video camera, which is just one cubic millimeter in size.

The prototype endoscope was designed at Germany's Fraunhofer Institute for Reliability and Microintegration, in collaboration with Awaiba GmbH and the Fraunhofer Institute for Applied Optics and Precision Engineering.

Ordinarily, digital video cameras consist of a lens, a sensor, and electrical contacts that relay the data from the sensor. Up to 28,000 sensors are cut out from a silicon disc known as a wafer, after which each one must be individually wired up with contacts and mounted to a lens.

In Fraunhofer's system, contacts are added to one side of the sensor wafer while it's still all in one piece. That wafer can then be joined face-to-face with a lens wafer, after which complete grain-of-salt-sized cameras can be cut out from the two joined wafers. Not only is this approach reportedly much more cost-effective, but it also allows the cameras to be smaller and more self-contained – usually, endoscopic cameras consist of a lens at one end of the cable, with a sensor at the other.

The new camera has a resolution of 62,500 pixels, and it transmits its images via an electrical cable, as opposed to an optical fiber. Its creators believe it could be used not only in medicine, but also in fields such as automotive design, where it could act as an aerodynamic replacement for side mirrors, or be used to monitor drivers for signs of fatigue.

They hope to bring the device to market next year.

By Ben Coxworth

Giftedness Linked to Prenatal Exposure to Higher Levels of Testosterone

Mrazik, a professor in the Faculty of Education's educational psychology department, and a colleague from Rider University in the U.S., have published a paper in Roeper Review linking giftedness (having an IQ score of 130 or higher) to prenatal exposure to higher levels of testosterone. Mrazik hypothesizes that, in the same way that physical and cognitive deficiencies can develop in utero, so, too, could exposure to this naturally occurring hormone result in giftedness.

A longstanding debate as to whether genius is a byproduct of good genes or good environment has an upstart challenger that may take the discussion in an entirely new direction. University of Alberta researcher Marty Mrazik says being bright may be due to an excess level of a natural hormone.

"There seems to be some evidence that excessive prenatal exposure to testosterone facilitates increased connections in the brain, especially in the right prefrontal cortex," said Mrazik. "That's why we see some intellectually gifted people with distinct personality characteristics that you don't see in the normal population."
Mrazik's notion came from observations made during clinical assessments of gifted individuals. He and his fellow researcher observed some specific traits among the subjects. This finding stimulated a conversation on the role of early development in setting the foundation for giftedness.

"It gave us some interesting ideas that there could be more to this notion of genius being predetermined from a biological perspective than maybe people gave it credit for," said Mrazik. "It seemed that the bulk of evidence from new technologies (such as Functional MRI scans) tell us that there's a little bit more going on than a genetic versus environmental interaction."

Based on their observations, the researchers hypothesized that this hormonal "glitch" in in-utero neurobiological development means that gifted children are born with an affinity for certain areas, such as the arts, math, or science. Mrazik cautions that more research is needed to determine what exact processes may cause the development of the gifted brain.

He notes that more is known about what derails the brain's normal development; charting what makes gifted people gifted is very much a new frontier. Mrazik hopes that devices such as the Functional MRI scanner will give researchers a deeper understanding of the role of neurobiology in the development of the gifted brain.

"It's really hard to say what does put the brain in a pathway where it's going to be much more precocious," he said. "The next steps in this research lay in finding out what exact stimuli causes this atypical brain development."


Low-Power Memory from Nanotubes

A new type of nonvolatile memory based on carbon nanotubes has dramatically lower power requirements than current technology. It uses the nanotubes to read and write data to small islands of phase-change materials, which store information. With further development, the new technology could extend battery life in mobile devices and also make desktop computers more efficient.

  Itty bitty bits: Three low-power phase-change memory bits are positioned between carbon-nanotube electrodes that have been colorized. The middle bit is “off” and the other two are “on.” The bits are arrayed on a silicon substrate that has been colored light blue.

Nonvolatile memory stores information even when the power is switched off. The standard technology for it, flash memory, is used in smart phones, cameras, USB sticks, and fast-booting netbook computers. But the storage density of flash memory is reaching its limit because the transistors used to make flash memory arrays cannot be miniaturized any further. The power needed to write to flash is also a speed limitation, and it drains the batteries in portable devices.

The new nonvolatile memory, developed by Eric Pop, professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign, and colleagues, can hold more data than flash while requiring considerably less power.

A few replacements for flash are in development. The one that's closest to commercialization is phase-change memory. The "bits" in phase-change memory are small islands of materials called chalcogenides that switch between glassy and crystalline states when rapidly heated. The two phases have different electrical resistances that can represent "1" or "0"—the bits are read by passing a small current through an electrode to read the resistance. Samsung and Numonyx, a memory company owned by Micron, have both promised to release phase-change memory products soon.

While phase-change memory promises to be denser than flash, for the most part it still has relatively high power requirements. "The traditional drawback of phase-change memory is that you need significant heat to change the phase," says Victor Zhirnov, director of special projects at the Semiconductor Research Corporation.

The device designed by Pop and colleagues at the University of Illinois tackles the power consumption problem by allowing the phase-change memory bits to be further miniaturized. The smaller the chalcogenide bit, the less the energy required to heat it up and change its phase.
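That scaling argument can be made concrete with a back-of-the-envelope estimate. Assuming, purely for illustration, that the switching energy scales with the heated volume of the bit (linear size cubed), shrinking from the ~50-nanometer scale of conventional contacts toward the ~10-nanometer islands used here gives:

```python
def energy_ratio(old_size_nm, new_size_nm):
    """Rough ratio of switching energies, assuming the energy to heat a
    phase-change bit scales with its volume, i.e., with linear size cubed
    (an illustrative assumption, not a measured device model)."""
    return (new_size_nm / old_size_nm) ** 3

# ~50 nm contact-limited bit vs. ~10 nm nanotube-contacted bit:
print(energy_ratio(50, 10))  # ~0.008, i.e., more than a hundredfold reduction
```

This crude volume estimate lands in the same ballpark as the roughly hundredfold current reduction the group reports, though the real device physics is considerably more involved.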

The limit in today's technologies isn't the phase-change material itself but the size of the electrical contacts needed to connect to the islands where information is stored. "The smallest contacts they have been able to make are about 50 nanometers," says Pop. His devices use narrow, highly conductive carbon nanotubes as the electrical contacts. The nanotubes range from one to five nanometers in diameter, and they connect to pieces of phase-change material that are about 10 nanometers in size.

Pop's group uses established methods to grow arrays of flat, parallel carbon nanotubes on silicon chips. Then they add metal electrical contacts to the nanotubes and zap the nanotubes with a large pulse of electricity until they snap in two, leaving a tiny gap between the two halves. The device is then coated with a phase-change material called GST (a compound of germanium, antimony, and tellurium). Each bit consists of an island of GST in the gap in a nanotube. When a small amount of current is passed through the nanotubes, it heats the material, changing its phase. The nanotubes can also be used to read the bit.

It takes about 0.1 milliamps to switch a conventional phase-change memory bit. This week in the journal Science, Pop's group reports writing to their phase-change memory using one-hundredth as much current. They've shown that they can write and rewrite each bit hundreds of times and have made arrays of about 100 bits. Pop says the next step is to demonstrate millions of read-write cycles and larger arrays.

If the device can be produced in high volume, it could benefit not just portable electronics but also desktops and mainframes, says Zhirnov. In today's computers, moving information between processors and memory, and reading and writing to memory, takes a lot of energy and generates waste heat. Bringing the logic and memory closer together, onto the same chip if possible, is a major goal in computing, says Zhirnov. 

Putting the two on one chip isn't possible with flash, because writing to flash memory requires voltages about 20 times higher than the voltages needed to operate digital logic. The nanotube devices, on the other hand, operate in the same voltage range as the transistors used for digital logic. In principle, this means they could be used to make chips that integrate memory and logic and operate very efficiently.

The Illinois researchers must also demonstrate that the memory is reliable. There's a lot of variation in the size and conductive properties of carbon nanotubes. Pop says that so far, these variations don't seem to matter in the memory devices. But that remains to be proved on a larger scale. 

By Katherine Bourzac
From  Technology Review

A Search Engine for the Human Body

A new search tool developed by researchers at Microsoft indexes medical images of the human body, rather than the Web. On  CT scans, it automatically finds organs and other structures, to help doctors navigate in and work with 3-D medical imagery.

  Inside out: A close-up of a CT scan processed by new software from Microsoft.

CT scans use X-rays to capture many slices through the body that can be combined to create a 3-D representation. This is a powerful tool for diagnosis, but it's far from easy to navigate, says Antonio Criminisi, who leads a group at Microsoft Research Cambridge, U.K., that is attempting to change that. "It is very difficult even for someone very trained to get to the place they need to be to examine the source of a problem," he says.

When a scan is loaded into Criminisi's software, the program indexes the data and lists the organs it finds at the side of the screen, creating a table of hyperlinks for the body. A user can click on, say, the word "heart" and be presented with a clear view of the organ without having to navigate through the imagery manually.

Once an organ of interest has been found, a 2-D and an enhanced 3-D view of structures in the area are shown to the user, who can navigate by touching the screen on which the images are shown. A new scan can also be automatically and precisely matched up alongside a past one from the same patient, making it easy to see how a condition has progressed or regressed.

Criminisi's software uses the pattern of light and dark in the scan to identify particular structures; it was developed by training machine-learning algorithms to recognize features in hundreds of scans in which experts had marked the major organs. Indexing a new scan takes only a couple of seconds, says Criminisi. The system was developed in collaboration with doctors at Addenbrookes Hospital in Cambridge, U.K.

The Microsoft research group is exploring the use of gestures and voice to control the system. They can plug in the Kinect controller, ordinarily used by gamers to control an Xbox with body movements, so that surgeons can refer to imagery in mid-surgery without compromising their sterile gloves by touching a keyboard, mouse, or screen.
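The indexing idea can be caricatured in a toy sketch (this is not Microsoft's algorithm, and every name and number below is invented for illustration): learn a typical CT intensity per organ from expert-labeled voxels, then label a new scan's voxels by nearest intensity and report where each organ was found, producing the "table of hyperlinks" for the body.

```python
def train(labeled_voxels):
    """labeled_voxels: list of (intensity, organ) pairs from expert-marked
    scans. Returns a model mapping organ -> mean intensity."""
    sums, counts = {}, {}
    for intensity, organ in labeled_voxels:
        sums[organ] = sums.get(organ, 0.0) + intensity
        counts[organ] = counts.get(organ, 0) + 1
    return {organ: sums[organ] / counts[organ] for organ in sums}

def index_scan(scan, model):
    """scan: dict mapping voxel position -> intensity. Returns a dict
    mapping organ -> list of positions (the 'table of hyperlinks')."""
    index = {}
    for pos, intensity in scan.items():
        organ = min(model, key=lambda o: abs(model[o] - intensity))
        index.setdefault(organ, []).append(pos)
    return index

# Invented training data: on CT, bone reads bright, lung reads dark.
model = train([(1000, "bone"), (950, "bone"), (-800, "lung"), (-750, "lung"),
               (40, "heart"), (60, "heart")])
scan = {(0, 0, 0): 980, (1, 2, 3): -780, (4, 4, 4): 55}
print(index_scan(scan, model))  # maps each organ name to where it was found
```

The real system replaces this nearest-mean toy with classifiers trained on rich spatial features across hundreds of scans, but the output is the same in spirit: clickable organ names tied to locations in the imagery.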

Body search: This CT image shows organs and other features identified by the Microsoft software. A list of these features appears at left.

Kenji Suzuki, an assistant professor at the University of Chicago, whose research group works on similar tools, says the Microsoft software has the potential to improve patient care, provided it really does make scans easier to navigate. "As medical imaging has advanced, so many images are produced that there is a kind of information overload," he explains. "The workload has grown a lot."

Suzuki says Microsoft's approach is a good one, but that medical professionals might be more receptive to the design if it indexed signs of disease, not just organs. His own research group has developed software capable of recognizing potentially cancerous lung nodules; in trials, it made half as many mistakes as a human expert.

Criminisi sticks by the notion of using organs as a kind of navigation system but says that disease-spotting capability is also under development. He says, "We are working to train it to detect differences between different grades of glioma tumor"—a type of brain tumor.

The Microsoft group also intends the tool to be used at large scales. It could automatically index a collection of 3-D scans or other images, making possible new ways of tracking medical records, says Criminisi. Today, records are kept as text that describes scans and other information. A search tool that finds the word "heart," for example, would not know whether the word referred to a scan or was mentioned in another context. If a hospital's computer system indexed new scans, the Microsoft software could automatically record what was imaged in a person's records and when.

By Tom Simonite
From  Technology Review

A Better Disc for Back Repair

The back pain caused by a damaged intervertebral disc often requires surgery, which means either replacing the disc with a plastic or metal implant or removing the disc and fusing the adjacent ones together. A new type of replacement disc—consisting of a scaffold seeded with living cells—could relieve back pain without many of the side effects caused by existing surgical approaches. 

  Cold printing: The material shown here acts as a scaffold for an artificial intervertebral disc containing living cells.

Researchers at the Medical University of South Carolina made a prototype replacement disc by printing an outer scaffold and then seeding the scaffold with living cells. The scaffold closely mimics the intricately layered microstructure of a real intervertebral disc, and is the first step toward making an implant that can perform the same supportive and shock-absorbing functions as the original. Compared to the metal and plastic implants used today, an artificial scaffold swathed in living tissue could repair itself, and constant access to a blood supply would reduce the risk of infection after surgery.

An intervertebral disc, or IVD, is shaped like a jelly donut, with a soft, elastic center and a tougher, fibrous outer layer. Sandwiched between vertebrae in the spine, the disc defines and supports the spine's movement, holding bones in place while allowing the spine as a whole to bend and twist. The discs also act as shock absorbers, cushioning impacts to the spine. When a disc becomes worn, pressure along the spine is unevenly distributed, and if the vertebrae shift even slightly, they stretch the nerves circling the spine, causing pain. If exercise and physical therapy offer no relief, surgery may be required.

Spinal fusion, however, restricts bending and twisting in the fused section of the spine, so some surgeons make a strong case for the implant method. "None of us were born with fused spines," says Barton Sachs, a professor of orthopedics at the Medical University of South Carolina, who routinely performs disc-implant surgery and who was not connected with the new work. Fusing two bones together can increase the pressure on neighboring segments, wearing out other discs, Sachs says. Not only does an implant preserve motion, but the recovery time from implant surgery is shorter. "It works extremely well," says Sachs. "[Patients] get out of the hospital faster; they get back to their lifestyles faster."  But the implants currently used do not absorb shock. "You're putting in materials that look medieval, and that's the state of current clinical practice," says Robert Mauck, professor of orthopedic surgery and tissue engineering at the University of Pennsylvania. Mauck is working on a competing improvement to disc implants.

The research team, led by Xuejun Wen, professor of bioengineering and regenerative medicine at Clemson University and the Medical University of South Carolina, tried to closely mimic the natural architecture of the disc, so that it can perform the same functions as the original.

First, they modeled the complex inner structure of the disc on a computer. Then they extruded dissolved polyurethane through a fine glass micropipette tip onto a platform kept at -4 degrees Celsius. The cool temperature of the base caused each printed layer to solidify quickly and allowed successive layers to stack on and maintain their shape. "If you don't cool it really quickly, you won't get the structure you want," says Wen. Finally, the group seeded bovine cells on the scaffold, to test if the structure supported cell growth. These grew to fill it over 19 days, after which the cells were found to have arranged themselves as they would in a natural disc. 

"It's a clever application of additive manufacturing and an exciting piece of work," says James Iatridis, professor of orthopedics and neurosurgery at the Mount Sinai School of Medicine, in New York. But Iatridis adds that "several alternate approaches involving biological repair are nearing clinical trials or at more advanced stages of development."

While the new work by Wen and his group comes closest to replicating the microstructure of a real disc, its performance has yet to be tested. In the next few months, the discs will be tested in rats. "We're still trying to understand how complicated our engineered solutions need to be in order to restore function," says Mauck. 

By Nidhi Subbaraman
From  Technology Review

Engineered Viruses Boost Memory Recall in Rats

Memories fade with time, often to the annoyance of those who can’t recall important details. But scientists have now found a way to boost the recall of memories even after they’ve started to fade. Unfortunately, the method involves injecting an engineered virus directly into the brain, so those of us who are bad with names may want to wait a bit for the technique to be refined. The work was done in rats, and the memories in question are associations between a specific taste — saccharine, for example — and an unpleasant stimulus, caused by injection of a nausea-inducing drug (the approach is called “conditioned taste aversion”). Unless the unpleasant association is reinforced, the memories will slowly fade with time, although the aversion doesn’t disappear entirely during the two-week period that the authors were looking at.

Two years ago, the same authors found that it was possible to radically accelerate this fading. By injecting a chemical that blocked a specific brain enzyme (protein kinase Mζ), the authors caused the rats to act as if they had never experienced the nausea, even if the memory manipulation took place 25 days after the conditioning. Most chemicals that interfere with memories tend to prevent them from being consolidated for long-term storage, but this chemical seemed to work even after the memory was firmly in place.

That’s potentially helpful, since some people have formed negative associations with harmless or even helpful items. Still, for most of us, it would be nice to think that fading memories could be resuscitated. Apparently, they can. The researchers have now done what’s effectively the converse experiment, and increased the activity of protein kinase Mζ. They did this by engineering a virus to express the gene for the kinase, and then infected specific areas of the brain involved in memory. All the infected cells had additional copies of the gene, and thus made more of its product.

The virus had exactly the effect that the authors would presumably have predicted. It was injected a week after the rats were given the aversion conditioning, when the memory would already have started to fade, and the memory tests were done a week after that; yet the rats showed significantly improved retention of their memories. As the authors point out, the engineered virus boosted a memory that was formed before the virus was even present.

Image: The memory molecule PKMζ overexpressed in rat neurons. Red (left) shows PKMζ; green (middle) is a fluorescent protein marking nerve cells infected by viruses engineered to boost the memory molecule; yellow (right) shows that the two overlap only at certain locations in the neuron. (Weizmann Institute of Science/Science)

Actually, you can make that memories, plural. The authors trained rats to avoid both saccharin and salty liquids over the course of three days, and then injected the virus a week after the last training. The memories of both trainings were enhanced by the presence of the viral protein kinase Mζ gene.

The authors can’t tell exactly what protein kinase Mζ is doing to increase the recall of memories, and suggest it could be either enhancing the association between taste and the unpleasant experience, or simply enhancing recall in general. Although they don’t mention it, their findings may also be limited to specific classes of memories, like the associations examined here.

That latter point makes the last sentence of the paper a bit over the top, as the authors suggest that a chemical that enhances protein kinase Mζ activity might make for a good treatment for memory disorders like amnesia and age-related decline. Until we have a clearer sense of how many types of memories it works for, that’s a bit premature. Fortunately, there are lots of ways to test the recall abilities of animals, many of which don’t involve negative associations. Hopefully, testing of the virus’ more general impact on memory is already underway.

Image: HIV (green dots), a member of the lentivirus genus. (C. Goldsmith/P. Feorino/E. L. Palmer/W. R. McManus/CDC)
Citation: “Enhancement of Consolidated Long-Term Memory by Overexpression of Protein Kinase Mζ in the Neocortex.” Reut Shema, Sharon Haramati, Shiri Ron, Shoshi Hazvi, Alon Chen, Todd Charlton Sacktor and Yadin Dudai. Science, Vol. 331, March 3, 2011. DOI:


New Kinds of Superconductivity? Physicists Demonstrate Coveted 'Spin-Orbit Coupling' in Atomic Gases

In the researchers' demonstration of spin-orbit coupling, two lasers allow an atom's motion to flip it between a pair of energy states. The new work, published in Nature, demonstrates this effect for the first time in bosons, one of the two major classes of particles. The same technique could be applied to fermions, the other major class, according to the researchers. The special properties of fermions would make them ideal for studying new kinds of interactions between two particles -- for example, those leading to novel "p-wave" superconductivity, which may enable a long-sought form of quantum computing known as topological quantum computation.

Image: In an ultracold gas of nearly 200,000 rubidium-87 atoms (shown as the large humps), the atoms can occupy one of two energy levels (represented as red and blue); lasers then link these levels together as a function of the atoms' motion. At first, atoms in the red and blue energy states occupy the same region (Phase Mixed); at higher laser strengths, they separate into different regions (Phase Separated).

In an unexpected development, the team also discovered that the lasers modified how the atoms interacted with each other and caused atoms in one energy state to separate in space from atoms in the other energy state.

One of the most important phenomena in quantum physics, spin-orbit coupling describes the interplay that can occur between a particle's internal properties and its external properties. In atoms, it usually describes interactions that only occur within an atom: how an electron's orbit around an atom's core (nucleus) affects the orientation of the electron's internal bar-magnet-like "spin." In semiconductor materials such as gallium arsenide, spin-orbit coupling is an interaction between an electron's spin and its linear motion in a material.
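In textbook notation (an illustrative sketch, not equations from the article), the two flavors of coupling described above look quite different on paper: the atomic term ties the electron's spin S to its orbital angular momentum L around the nucleus, while the Rashba term typical of gallium-arsenide structures ties spin to linear momentum k:

```latex
% Atomic fine-structure coupling: spin tied to orbital motion about the nucleus
H_{\text{atomic}} = \xi(r)\,\mathbf{L}\cdot\mathbf{S}

% Rashba coupling in a 2-D semiconductor: spin tied to linear momentum
H_{\text{R}} = \alpha\,(\sigma_x k_y - \sigma_y k_x)
```

Here ξ(r) sets the strength of the atomic coupling, α is the Rashba coefficient, and σ_x, σ_y are Pauli spin matrices; in both cases a change in the particle's motion feeds back on its spin.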

"Spin-orbit coupling is often a bad thing," said JQI's Ian Spielman, senior author of the paper. "Researchers make 'spintronic' devices out of gallium arsenide, and if you've prepared a spin in some desired orientation, the last thing you'd want it to do is to flip to some other spin when it's moving."

"But from the point of view of fundamental physics, spin-orbit coupling is really interesting," he said. "It's what drives these new kinds of materials called 'topological insulators.'"

One of the hottest topics in physics right now, topological insulators are special materials in which location is everything: the ability of electrons to flow depends on where they are located within the material. Most regions of such a material are insulating, and electric current does not flow freely. But in a flat, two-dimensional topological insulator, current can flow freely along the edge in one direction for one type of spin, and the opposite direction for the opposite kind of spin. In 3-D topological insulators, electrons would flow freely on the surface but be inhibited inside the material. While researchers have been making higher and higher quality versions of this special class of material in solids, spin-orbit coupling in trapped ultracold gases of atoms could help realize topological insulators in their purest, most pristine form, as gases are free of impurity atoms and the other complexities of solid materials.

Usually, atoms do not exhibit the same kind of spin-orbit coupling as electrons exhibit in gallium-arsenide crystals. While each individual atom has its own spin-orbit coupling going on between its internal components (electrons and nucleus), the atom's overall motion generally is not affected by its internal energy state.

But the researchers were able to change that. In their experiment, researchers trapped and cooled a gas of about 200,000 rubidium-87 atoms down to 100 nanokelvins, 3 billion times colder than room temperature. The researchers selected a pair of energy states, analogous to the "spin-up" and "spin-down" states in an electron, from the available atomic energy levels. An atom could occupy either of these "pseudospin" states. Then researchers shined a pair of lasers on the atoms so as to change the relationship between the atom's energy and its momentum (its mass times velocity), and therefore its motion. This created spin-orbit coupling in the atom: the moving atom flipped between its two "spin" states at a rate that depended upon its velocity.
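In the standard theoretical description of such Raman-dressed gases (conventional symbols, a sketch rather than the paper's own notation), the single-atom energy takes the form:

```latex
H = \frac{\hbar^2}{2m}\bigl(k_x + k_R\,\sigma_z\bigr)^2
  + \frac{\Omega}{2}\,\sigma_x
  + \frac{\delta}{2}\,\sigma_z
```

The matrices σ_x and σ_z act on the two pseudospin states, k_R is the recoil momentum the lasers impart, Ω is the strength of the laser coupling, and δ its detuning from resonance. Expanding the square produces a cross term proportional to k_x σ_z, which is precisely a spin energy that depends on the atom's velocity.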

"This demonstrates that the idea of using laser light to create spin-orbit coupling in atoms works. This is all we expected to see," Spielman said. "But something else really neat happened."

They turned up the intensity of their lasers, and atoms of one spin state began to repel the atoms in the other spin state, causing them to separate.

"We changed fundamentally how these atoms interacted with one another," Spielman said. "We hadn't anticipated that and got lucky."

The rubidium atoms in the researchers' experiment were bosons, sociable particles that can all crowd into the same space even when every one of their properties, spin included, is identical. But Spielman's calculations show that the team could also create this same effect in ultracold gases of fermions. Fermions, the more antisocial type of atoms, cannot occupy the same space when they are in identical states. And compared to other methods for creating new interactions between fermions, the spin states would be easier to control and longer lived.

A spin-orbit-coupled Fermi gas could interact with itself because the lasers effectively split each atom into two distinct components, each with its own spin state, and two such atoms with different velocities could then interact and pair up with one another. This kind of pairing opens up possibilities, Spielman said, for studying novel forms of superconductivity, particularly "p-wave" superconductivity, in which two paired atoms have a quantum-mechanical phase that depends on their relative orientation. Such p-wave superconductors may enable a form of quantum computing known as topological quantum computation.
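As a rough illustration (standard notation, not from the article), the defining feature of a two-dimensional p-wave superconductor is a pairing gap whose phase winds with the pair's relative momentum:

```latex
\Delta(\mathbf{k}) \propto k_x + i\,k_y = |\mathbf{k}|\,e^{i\varphi_{\mathbf{k}}}
```

The phase φ_k advances by 2π as the relative momentum rotates once -- this is the orientation dependence of the pairing mentioned above, and it is the ingredient that topological quantum computation proposals rely on.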