Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........

A Brighter Future for Retinal Implants

The latest generation of retinal implants has shown striking promise in tests involving a handful of blind patients. The implants have enabled many subjects to recognize objects and obstacles and given one person the ability to read large print. Such advances mark a turning point after decades of slow progress. And experts now say that commercial devices may be just a couple of years away.
Future vision: The Argus II series retinal implant, shown here inside an eye, uses an array of 60 electrodes to deliver visual information to a user's brain.

Retinal implants are designed to replace the function of damaged light-sensing photoreceptor cells in the retina. In particular, they are aimed at treating degenerative diseases such as retinitis pigmentosa and age-related macular degeneration. Using an array of electrodes placed either beneath the retina or on top of it, the devices work by electrically stimulating the remaining cell circuitry in the retina to produce pixel-like sensations of light, called phosphenes, in the visual field.

Peter Walter at the University Eye Clinic at Aachen, who chaired the Artificial Vision symposium in Bonn, Germany, where results from several projects were presented last week, notes that optimistic claims have been made about retinal implants in the past. But he says the success of several long-term studies has given researchers confidence that the remaining challenges are more technological than biological. "Within two or three years we could have products available," Walter says.

Ongoing trials involving one device, the Argus II, a retinal implant developed by Second Sight of Sylmar, CA, have been so promising that the company is already preparing for the market. "We are going to be starting the work to get applications for CE marking in Europe and authorization in the U.S. from the FDA," says Gregoire Cosendai, the company's director of operations for Europe.

In the past it has often been unclear whether the phosphenes seen by patients were due to the implant functioning correctly or to other factors, such as the recovery of photoreceptors triggered by the trauma of surgery--a phenomenon known as the rescue effect. But now that researchers have moved away from acute implantations--implanting and removing the devices during the same surgical procedure--to chronically implanting them, it is possible to test them more rigorously. Such experiments are difficult and time-consuming, but they can establish that the phosphenes occur only in the parts of the retina where there are electrodes, says Walter. "If you switch off the device, then this effect disappears," he says.

Trials of the Argus II have shown that some limited vision can be restored to blind patients, helping them to recognize objects and make out doorways or roadsides. The first commercial devices will offer this kind of vision, says Cosendai. The Argus II consists of a small chip containing about 60 stimulating electrodes and a glasses-mounted camera that feeds images and power to the implant via a wireless induction loop.
Bionic eye: Images are fed to the Argus II implant chip from a camera via a wireless induction loop, with the receiver attached to the outside of the eyeball.

There is hope that the resolution and granularity of these devices can be improved further and that the devices can be made more self-contained. At last week's symposium, Eberhart Zrenner, director of the Institute for Ophthalmic Research at the University of Tübingen, in Germany, presented the results of a trial involving a patient who was able to read eight-centimeter-high letters, albeit with the assistance of a large magnifying device called a dioptre lens. This was achieved using a 3-millimeter-diameter implant made up of roughly 1,500 electrodes, each connected to a photocell. These photocells are used both to sense light and to power the electrodes, which means no external power or camera is needed.
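For a sense of scale (a back-of-the-envelope estimate, not a figure from the trial), spreading 1,500 electrodes uniformly over a 3-millimeter-diameter chip implies an electrode spacing of roughly 70 micrometers:

```python
import math

# Rough estimate of electrode spacing on Zrenner's chip, assuming the
# 1,500 electrodes are spread uniformly over a circular 3 mm chip
# (an illustration, not a measured value from the article).
diameter_mm = 3.0
n_electrodes = 1500

area_mm2 = math.pi * (diameter_mm / 2) ** 2           # ~7.07 mm^2
pitch_um = math.sqrt(area_mm2 / n_electrodes) * 1000  # one electrode per cell

print(f"chip area: {area_mm2:.2f} mm^2")
print(f"approximate electrode pitch: {pitch_um:.0f} micrometers")  # ~69
```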

Although Zrenner's device is compact, it is designed only for semichronic implantation and cannot survive in the body for long periods of time, says Mark Humayun, a retinal surgeon at the University of Southern California who is involved in the Argus II trials. What's more, Humayun says that reading text has been demonstrated before, albeit with considerably larger letters. "It translates into little useful reading vision, not only because letters are too big, but because it often takes 30 seconds to recognize a single letter," he says.

Cosendai says that, for now, the field is taking small steps and trying not to overstate the potential. Initially, he says, retinal implants will be used to merely help people navigate and orient themselves.

The signal-processing side of these implants remains a key technical challenge, says Cosendai. A patient's brain often needs to be retrained to adapt to the new stimulation.

Rolf Eckmiller, another researcher in the field at the University of Bonn, says that much remains to be done. "Progress has been made, but we have so far underestimated the amount of work involved," he says.

Seeing shapes and edges may help many people become more mobile, Eckmiller says, but it's a big leap to restoring full vision or even the ability to recognize faces or to read. "There's a difference between seeing and recognizing a banana, and seeing something that might be a banana," he says. Currently our understanding of the signals required to make this leap is lacking, he says.

By Duncan Graham-Rowe

Nanosensing Transistors Powered by Stress

Nanoscale sensors have many potential applications, from detecting disease molecules in blood to sensing sound within an artificial ear. But nanosensors typically have to be integrated with bulky power sources and integrated circuits. Now researchers at Georgia Tech have demonstrated a nanoscale sensor that doesn't need these other parts.

The new sensors consist of freestanding nanowires made of zinc oxide. When placed under stress, the nanowires generate an electrical potential that allows them to function as transistors.

Zhong Lin Wang, professor of materials science at Georgia Tech, has previously used piezoelectric nanowires to make nanogenerators that harvest biomechanical energy, which he hopes will eventually be used to power portable electronics. Now Wang's group is taking advantage of the semiconducting properties of zinc oxide nanowires: the electrical potential generated when the nanowires are bent acts as a gate signal, allowing them to behave as transistors.

Stress sensor: This scanning-electron-microscope image shows a stress-triggered transistor in cross section. The zinc oxide nanowire, 25 nanometers in diameter, is embedded in a polymer (black area), leaving the top region free to bend.

The Georgia Tech researchers used a vertical zinc oxide wire 25 nanometers in diameter to make a field-effect transistor. The nanowire is partially embedded in a substrate and connected at the root to gold electrodes that act as the source and the drain. When the wire is bent, the mechanical stress concentrates at the root, and charges build up. This creates an electrical potential that acts as a gate voltage, allowing electrical current to flow from source to drain, turning the device on. Wang's group has tested various triggers, including using a nanoscale probe to nudge the wire, and blowing gas over it.
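As an illustration of that switching behavior, here is a toy model in Python (my sketch with invented constants, not the Georgia Tech group's data): bending strain produces a piezoelectric gate potential, and the channel conducts once that potential crosses a threshold.

```python
# Toy model of a strain-gated nanowire transistor: bending strain produces
# a piezoelectric gate potential; current flows once it exceeds a threshold.
# All constants below are illustrative placeholders, not measured values.

PIEZO_COEFF_V_PER_STRAIN = 50.0   # hypothetical volts per unit strain
THRESHOLD_V = 0.4                 # hypothetical turn-on gate voltage
ON_CURRENT_NA = 120.0             # hypothetical saturated drain current (nA)

def drain_current_na(strain: float) -> float:
    """Drain current for a given bending strain (very simplified)."""
    gate_v = PIEZO_COEFF_V_PER_STRAIN * strain
    if gate_v < THRESHOLD_V:
        return 0.0                # device off: no conductive channel
    # crude linear rise above threshold, capped at the on-current
    return min(ON_CURRENT_NA, ON_CURRENT_NA * (gate_v - THRESHOLD_V))

for strain in (0.0, 0.005, 0.01, 0.02, 0.05):
    print(f"strain={strain:.3f} -> I_d = {drain_current_na(strain):.1f} nA")
```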

Wang's group is "unique in using nanostructures to make something like this," says Liwei Lin, codirector of the Berkeley Sensor and Actuator Center at the University of California, Berkeley. Nanowire sensors could be used in high-end sensing devices such as fingerprint scanners, Lin suggests.

Previous nanowire sensors have been tethered at both ends, limiting their range of motion. Wang says that the freestanding nanowires resemble the sensing hairs of the ear. If grouped into arrays of different lengths, each responsive to a different frequency of sound, the nanowires could potentially lead to battery-free hearing aids, he says.

The next step is to make arrays of the devices. "This is challenging because you have to make the electrical contact reliable, but we will be able to do that," says Wang.

By Katherine Bourzac

Cleaning Up on Dirty Coal

The industrial boomtown of Dongguan in southeast China's Pearl River Delta could soon host one of the country's most sophisticated power plants, one that uses an unconventional coal-gasification technology to make the dirtiest coal behave like clean-burning natural gas. Its developers, Atlanta-based utility Southern Company and Houston-based engineering firm KBR, announced the licensing deal with Dongguan Power and Chemical Company this month.
Cheap coal: This demonstration plant in Wilsonville, AL, uses a transport gasifier to turn two tons of cheap, low-quality coal per hour into a clean-burning gas. A plant based on similar technology is scheduled for China.

Dongguan Power plans to implement the gasification scheme at an existing 120-megawatt natural-gas-fired power plant, turning it into an integrated gasification combined cycle (IGCC) plant that uses cheap, moisture-laden lignite coal. The retrofit should be operating in 2011. That will provide its developers with a demonstration to determine whether the technology will work in larger IGCC plants and whether the process is suitable for integrating carbon capture and storage technology, according to John Thompson, director of the Coal Transition Program for the Clean Air Task Force, a nonprofit environmental consulting firm based in Boston. "They want to show that this works," says Thompson.

Southern and KBR's gasification design can use dirty coal because, compared to other gasification reactors, it uses a relatively slow, low-temperature process. Conventional gasifiers, such as General Electric's and Shell's, rely on temperatures around 1,500 °C to turn finely ground coal into a combustible mixture of carbon monoxide and hydrogen known as syngas. Unfortunately, such temperatures melt ash and other mineral contaminants in the coal, forming a glassy slag that eventually eats through the ceramic tiles that protect the reactors' steel walls. Even reactors using high-quality coal have to be taken out of service for installation of new tiles at least every three years. They are thus ill-adapted for lower-quality coals that would produce several times more slag.

Dongguan's gasifier will sidestep those issues by operating at just 925 °C to 980 °C, below the contaminant melting temperature, explains Randall Rush, Southern Company's general manager for gasification systems. Coal nevertheless gasifies completely at these lower temperatures because it spends twice as long in Southern and KBR's process.

The technology is an adaptation of the fluidized catalytic cracking employed in refineries since the 1940s, which processes petroleum by "transporting" it around a loop along with solid catalyst particles. In the gasification reactor, the incoming feed of fresh coal is transported with a looping flow of solid coal contaminants, primarily ash. The hot mass drives off most of the coal's energy content as syngas. The solids left over simply join the flow.

Southern and KBR began designing the technology in 1988 and, with support from the U.S. Department of Energy, started up a demonstration reactor at Wilsonville, AL, in 1996 that can gasify two tons of coal per hour. Four years ago they redesigned it, incorporating what they'd learned at Wilsonville. Rush says the result will be a comparatively reliable and affordable IGCC design. In the absence of slag, a reactor's ceramic lining should last 10 to 20 years, says Rush.

The technology is attractive to Dongguan Power because it can use coal that is cheaper and less desirable. Presentations by the firm note that a doubling in fuel costs between 2004 and 2006 eliminated the company's profit margin. And while Dongguan Power initially commissioned a reactor from the Chinese Academy of Sciences' Institute of Engineering Thermophysics, which built a demonstration-scale transport gasifier last year, the firm has now opted for KBR and Southern's design.

Dongguan is betting that its IGCC plant will become a standard for China as the country cracks down on emissions, and it has already laid plans for an 800-megawatt plant. Both projects await approval by China's National Development and Reform Commission, which controls the financial incentives needed to cover the higher cost of an IGCC plant - about double the price of a pulverized-coal plant. The agency has approved only one of about a dozen IGCC projects proposed to date - the 250-megawatt GreenGen project under construction in Tianjin.

IGCCs could take off faster in the U.S. thanks to a carbon cap-and-trade system under consideration by Congress. That is, if it passes and can push carbon prices high enough. Southern Company subsidiary Mississippi Power Company has hit a wall of protest with a proposal to build a 582-megawatt IGCC plant in Kemper County, MS, in advance of that price signal. Mississippi Power's plant would capture 65 percent of its carbon dioxide emissions, giving the lignite-fired plant a carbon footprint comparable to that of natural gas. That promise earned the $2.2 billion project $403 million in federal grants and tax breaks.

Thompson, at the Clean Air Task Force, argues that both the Dongguan and the Kemper projects must go forward because they provide a means of controlling carbon emissions from coal. He says environmentalists should recognize that coal use worldwide is not going away anytime soon, making carbon capture critical to achieving the very large reductions in greenhouse gas emissions required in the decades to come to minimize the ecological impacts of global climate change.

"If new technologies to tame the CO2 emissions from coal aren't widely deployed soon, everything the environmental movement has sought to achieve over the last century goes out the window," says Thompson. "If CCS isn't widely deployed, it's game over."

By Peter Fairley

Hyenas Cooperate, Problem-solve Better Than Primates

ScienceDaily (Sep. 29, 2009) — Spotted hyenas may not be smarter than chimpanzees, but a new study shows that they outperform the primates on cooperative problem-solving tests.

Captive pairs of spotted hyenas (Crocuta crocuta) that needed to tug two ropes in unison to earn a food reward cooperated successfully and learned the maneuvers quickly with no training. Experienced hyenas even helped inexperienced partners do the trick.

When confronted with a similar task, chimpanzees and other primates often require extensive training, and cooperation between individuals may not come easily, said Christine Drea, an evolutionary anthropologist at Duke University.

Drea's research, published online in the October issue of Animal Behaviour, shows that social carnivores like spotted hyenas that hunt in packs may be good models for investigating cooperative problem solving and the evolution of social intelligence. She performed these experiments in the mid-1990s but struggled to find a journal that was interested in non-primate social cognition.

"No one wanted anything but primate cognition studies back then," Drea said. "But what this study shows is that spotted hyenas are more adept at these sorts of cooperation and problem-solving studies in the lab than chimps are. There is a natural parallel of working together for food in the laboratory and group hunting in the wild."

Drea and co-author Allisa N. Carter of the University of California, Berkeley, designed a series of food-reward tasks that modeled group hunting strategies in order to single out the cognitive aspects of cooperative problem solving. They selected spotted hyenas to see whether the species' performance in the tests might be linked to its feeding ecology in the wild.

Spotted hyena pairs at the Field Station for the Study of Behavior, Ecology and Reproduction in Berkeley, Calif., were brought into a large pen where they were confronted with a choice between two identical platforms 10 feet above the ground. Two ropes dangled from each platform. When both ropes on a platform were pulled down hard in unison -- a similar action to bringing down large prey -- a trap door opened and spilled bone chips and a sticky meatball. The double-rope design prevented a hyena from solving the task alone, and the choice between two platforms ensured that a pair would not solve either task by chance.

The first experiment sought to determine if three pairs of captive hyenas could solve the task without training. "The first pair walked in to the pen and figured it out in less than two minutes," Drea said. "My jaw literally dropped."

Drea and Carter studied the actions of 13 combinations of hyena pairs and found that they synchronized their timing on the ropes, revealing that the animals understood the ropes must be tugged in unison. They also showed that they understood both ropes had to be on the same platform. After an animal was experienced, the number of times it pulled on a rope without its partner present dropped sharply, indicating the animal understood its partner's role.

"One thing that was different about the captive hyena's behavior was that these problems were solved largely in silence," Drea said. Their non-verbal communication included matching gazes and following one another. "In the wild, they use a vocalization called a whoop when they are hunting together."

In the second and third experiments, Drea found that social factors affected the hyenas' performance in both positive and negative ways. When an audience of extra hyenas was present, experienced animals solved the task faster. But when dominant animals were paired, they performed poorly, even if they had been successful in previous trials with a subordinate partner.

"When the dominant females were paired, they didn't play nicely together," Drea said. "Their aggression toward each other led to a failure to cooperate."

When a naïve animal unfamiliar with the feeding platforms was paired with a dominant, experienced animal, the dominant animal switched social roles and submissively followed the lower-ranking, naïve animal. Once the naïve animal became experienced, the pair switched back.

Both the audience and the role-switching trials revealed that spotted hyenas self-adjust their behavior based upon social context.

It was not a big surprise that the animals were strongly inclined to help each other obtain food, said Kay Holekamp, a professor of zoology at Michigan State University who studies the behavioral ecology of spotted hyenas.

"But I did find it somewhat surprising that the hyenas' performance was socially modulated by both party size and pair membership," Holekamp said. "And I found it particularly intriguing that the animals were sensitive to the naïveté of their potential collaborators."

Researchers have focused on primates for decades with the assumption that higher cognitive functioning in large-brained animals should enable organized teamwork. But Drea's study demonstrates that social carnivores, including dogs, may be very good at cooperative problem solving, even though their brains are comparatively small.

"I'm not saying that spotted hyenas are smarter than chimps," Drea said. "I'm saying that these experiments show that they are more hard-wired for social cooperation than chimpanzees."

Researchers Go Underground To Reveal 850 New Species In Australian Outback

ScienceDaily (Sep. 28, 2009) — Australian researchers have discovered a huge number of new species of invertebrate animals living in underground water, caves and "micro-caverns" amid the harsh conditions of the Australian outback.

A national team of 18 researchers has discovered 850 new species of invertebrates, which include various insects, small crustaceans, spiders, worms and many others.

The team – led by Professor Andy Austin (University of Adelaide), Dr Steve Cooper (South Australian Museum) and Dr Bill Humphreys (Western Australian Museum) – has conducted a comprehensive four-year survey of underground water, caves and micro-caverns across arid and semi-arid Australia.

Some of the 850 new species discovered in underground water, caves and micro-caverns across outback Australia.

"What we've found is that you don't have to go searching in the depths of the ocean to discover new species of invertebrate animals – you just have to look in your own 'back yard'," says Professor Austin from the Australian Center for Evolutionary Biology & Biodiversity at the University of Adelaide.

"Our research has revealed whole communities of invertebrate animals that were previously unknown just a few years ago. What we have discovered is a completely new component to Australia's biodiversity. It is a huge discovery and it is only about one fifth of the number of new species we believe exist underground in the Australian outback."

Only half of the species discovered have so far been named. Collectively, the animals found in underground water are known as "stygofauna", and those from caves and micro-caverns as "troglofauna".

Professor Austin says the team has a theory as to why so many new species have been hidden away underground and in caves.

"Essentially what we are seeing is the result of past climate change. Central and southern Australia was a much wetter place 15 million years ago when there was a flourishing diversity of invertebrate fauna living on the surface. But the continent became drier, a process that last until about 1-2 million years ago, resulting in our current arid environment. Species took refuge in isolated favorable habitats, such as in underground waters and micro-caverns, where they survived and evolved in isolation from each other.

"Discovery of this 'new' biodiversity, although exciting scientifically, also poses a number of challenges for conservation in that many of these species are found in areas that are potentially impacted by mining and pastoral activities," he says.

The research team has reported its findings at a scientific conference on evolution and biodiversity in Darwin, which celebrates the 200th anniversary of Charles Darwin's birth: http://www.evolutionbiodiversity2009.org. The conference finishes today.

The team's research has been funded by the Australian Research Council (ARC) Environmental Futures Network.

HIV’s Ancestors May Have Plagued First Mammals

ScienceDaily (Sep. 28, 2009) — The retroviruses that gave rise to HIV have been battling it out with mammal immune systems since mammals first evolved around 100 million years ago – about 85 million years earlier than previously thought, scientists now believe.

The remains of an ancient HIV-like virus have been discovered in the genome of the two-toed sloth [Choloepus hoffmanni] by a team led by Oxford University scientists who publish a report of their research in this week’s Science.

'Finding the fossilised remains of such a virus in this sloth is an amazing stroke of luck,’ said Dr Aris Katzourakis from Oxford’s Department of Zoology and the Institute for Emergent Infections, James Martin 21st Century School. ‘Because this sloth is so geographically and genetically isolated its genome gives us a window into the ancient past of mammals, their immune systems, and the types of viruses they had to contend with.’

The researchers found evidence of ‘foamy viruses’, a particular kind of retrovirus that resembles the complex lentiviruses, such as HIV and the simian immunodeficiency viruses (SIVs) – as opposed to simple retroviruses that are found throughout the genomic fossil record.

‘In previous work we had found evidence for similar viruses in the genomes of rabbits and lemurs but this new research suggests that the ancestors of complex retroviruses, such as HIV, may have been with us from the very beginnings of mammal evolution,’ said Dr Aris Katzourakis.

Understanding the historical conflict between complex viruses and mammal immune systems could lead to new approaches to combating existing retroviruses, such as HIV. It can also help scientists to decide which viruses that cross species are likely to cause dangerous pandemics – such as swine flu (H1N1) – and which, like bird flu (H5N1) and foamy viruses, cross the species barrier but then never cause pandemics in new mammal populations.

Discovery Brings New Type Of Fast Computers Closer To Reality

ScienceDaily (Sep. 28, 2009) — Physicists at UC San Diego have successfully created speedy integrated circuits with particles called “excitons” that operate at commercially cold temperatures, bringing the possibility of a new type of extremely fast computer based on excitons closer to reality.

Their discovery, detailed this week in the advance online issue of the journal Nature Photonics, follows the team’s demonstration last summer of an integrated circuit—an assembly of transistors that is the building block for all electronic devices—capable of working at 1.5 kelvin above absolute zero. That temperature, equivalent to minus 457 degrees Fahrenheit, is not only lower than the average temperature of deep space, but achievable only in special research laboratories.

Now the scientists report that they have succeeded in building an integrated circuit that operates at 125 kelvin, a temperature that, while still a chilly minus 234 degrees Fahrenheit, can easily be attained commercially with liquid nitrogen, a substance that costs about as much per liter as gasoline.
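As a quick sanity check on those temperatures (my arithmetic, not the researchers'), kelvin converts to Fahrenheit as F = K × 9/5 − 459.67; a minimal sketch in Python:

```python
# Kelvin-to-Fahrenheit conversion, to verify the figures quoted above.
def kelvin_to_fahrenheit(kelvin: float) -> float:
    return kelvin * 9.0 / 5.0 - 459.67

print(kelvin_to_fahrenheit(1.5))    # -456.97, the "minus 457 F" quoted above
print(kelvin_to_fahrenheit(125.0))  # -234.67, the chilly "minus 234 F"
```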

Alex High and Aaron Hammack adjust the optics in their UCSD lab.

“Our goal is to create efficient devices based on excitons that are operational at room temperature and can replace electronic devices where a high interconnection speed is important,” said Leonid Butov, a professor of physics at UCSD, who headed the research team. “We’re still in an early stage of development. Our team has only recently demonstrated the proof of principle for a transistor based on excitons and research is in progress.”

Excitons are pairs of negatively charged electrons and positively charged “holes” that can be created by light in a semiconductor such as gallium arsenide. When the electron and hole recombine, the exciton decays and releases its energy as a flash of light.

The fact that excitons can be converted into light makes excitonic devices faster and more efficient than conventional electronic devices with optical interfaces, which use electrons for computation and must then convert them to light for use in communications devices.

"Our transistors process signals using excitons, which like electrons can be controlled with electrical voltages, but unlike electrons transform into photons at the output of the circuit,” Butov said. “This direct coupling of excitons to photons allows us to link computation and communication."

Other members of the team involved in the discovery were physicists Gabriele Grosso, Joe Graves, Aaron Hammack and Alex High at UC San Diego, and materials scientists Micah Hanson and Arthur Gossard at UC Santa Barbara.

Their research was supported by the Army Research Office, the Department of Energy and the National Science Foundation.

New Species Discovered In The Greater Mekong At Risk Of Extinction Due To Climate Change

ScienceDaily (Sep. 28, 2009) — A bird-eating fanged frog, a gecko that looks like it’s from another planet, and a bird which would rather walk than fly -- these are among the 163 new species discovered in the Greater Mekong region last year that are now at risk of extinction due to climate change, says a new report launched by WWF ahead of UN climate talks in Bangkok.

During 2008 alone, scientists identified these rare and unique species within the jungles and rivers of the Greater Mekong, including a bird-eating fanged frog that lies in streams waiting for prey; one of only four new species of musk shrew to be described in recent times; and a leopard gecko whose "other world" appearance – orange eyes, spindly limbs and technicolour skin – inspired the report's title, Close Encounters.

Such is the immense biodiversity of this region that some discoveries such as the tiger-striped pitviper were made by accident.

“We were engrossed in trying to catch a new species of gecko when my son pointed out that my hand was on a rock mere inches away from the head of a pitviper! We caught the snake and the gecko and they both proved to be new species,” said Dr Lee Grismer of La Sierra University in California.

Leopard gecko, one of the 163 new species discovered in the Greater Mekong region in 2008.

Close Encounters spotlights species newly identified by science including 100 plants, 28 fish, 18 reptiles, 14 amphibians, 2 mammals and a bird, all discovered in 2008 within the Greater Mekong region of Southeast Asia that spans Cambodia, Laos, Myanmar, Thailand, Vietnam and the south-western Chinese province of Yunnan.

The reluctant flyer, the Nonggang babbler, was observed walking longer distances rather than flying; it would use its wings only when frightened.

“After millennia in hiding these species are now finally in the spotlight, and there are clearly more waiting to be discovered,” said Stuart Chapman, Director of the WWF Greater Mekong Programme.

But no sooner are these new species discovered than their survival is threatened by the devastating impacts of climate change, the report warns.

Recent studies show the climate of the Greater Mekong region is already changing. Models suggest continued warming, increased variability and more frequent and damaging extreme climate events.

Rising seas and saltwater intrusion will cause major coastal impacts, especially in the Mekong River delta, which is one of the three most vulnerable deltas on Earth, according to the most recent Intergovernmental Panel on Climate Change report.

“Some species will be able to adapt to climate change, many will not, potentially resulting in massive extinctions,” said Chapman.

“Rare, endangered and endemic species like those newly discovered are especially vulnerable because climate change will further shrink their already restricted habitats,” he said.

Often these newly discovered species depend on a small number of other species for their survival. If those species respond to climate change in a way that disrupts this closely evolved relationship, the newly discovered species are put at even greater risk of extinction.

Over the next two weeks, government delegates will meet in Bangkok, Thailand, for the next round of UN climate change talks in the lead up to the Copenhagen Climate Summit this December, where the world is scheduled to agree on a new global climate treaty.

“The treasures of nature are in trouble if governments fail to agree a fair, ambitious and binding treaty that will prevent runaway climate change,” said Kathrin Gutmann, Head of Policy and Advocacy at the WWF Global Climate Initiative.

“Protecting endangered species and vulnerable communities in the Greater Mekong and elsewhere around the world depends on fast progress at the UN talks in Bangkok - a hugely important conference that can lay the groundwork for success at the Copenhagen Climate Summit this December.”

Hot Space Shuttle Images

Researchers at NASA are using a novel thermal-imaging system on board a Navy aircraft to capture images of heat patterns that light up the surface of the space shuttle as it returns through the Earth's atmosphere. The researchers have thus far imaged three shuttle missions and are processing the data to create 3-D surface-temperature maps. The data will enable engineers to design systems to protect future spacecraft from the searing heat--up to 5,500 degrees Celsius--seen during reentry.

"We want to understand peak temperatures, when they happen and where, because that determines the type of material for, and size of, a protection system," says Thomas Horvath, principal investigator of the project, called Hypersonic Thermodynamic InfraRed Measurements (HYTHIRM), at Langley Research Center in Hampton, VA.

Hot body: These thermal images were taken of space shuttle Discovery on September 11. Temperature data was used to make the color images (middle and bottom), blue being the lowest temperatures and red the highest.

NASA has become more concerned with safety and developing tools for inspecting and protecting the shuttle since the 2003 space shuttle Columbia disaster, when damage to the shuttle's wing compromised its heat-resistant shield, causing it to lose structural integrity and break apart during reentry, killing all seven astronauts aboard. Horvath, also a support team member of the Columbia Accident Investigation Board (CAIB), says the HYTHIRM project was developed in response to the Columbia accident.

"I certainly think [the researchers] can learn something about what causes the heating," says Douglas Osheroff, a professor of physics and applied physics at Stanford University, and a member of the CAIB. He adds that the thermal images could also be used as a diagnostic tool to check the integrity of the shuttles tiles during reentry. Currently engineers must manually inspect the tiles upon the shuttle's return.

To image the shuttle, the researchers used a novel optical system called Cast Glance on board a Navy P-3 Orion aircraft. The system is used mostly by the Department of Defense for missile defense missions and so had to be slightly modified for the NASA project. The Navy researchers added a high-resolution, off-the-shelf video camera and adjusted it to filter infrared light. They then calibrated Cast Glance's optical sensors so that, by measuring the infrared radiation from the shuttle, they could calculate the surface temperatures.

The Navy jet flies to within 37 kilometers of the space shuttle when the latter is traveling at speeds of between two and three miles per second, acquiring eight uninterrupted minutes of data: approximately 10,000 to 15,000 images for each mission.
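Those numbers imply a sustained imaging rate of roughly 20 to 30 frames per second; a quick check of the arithmetic:

```python
# Quick arithmetic check on the figures above: eight minutes of data
# and 10,000-15,000 images per mission.
duration_s = 8 * 60
for n_images in (10_000, 15_000):
    print(f"{n_images} images -> {n_images / duration_s:.0f} frames per second")
# 10000 -> ~21 fps; 15000 -> ~31 fps
```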

The researchers' focus was the underbelly of the shuttle, which is covered by about 10,000 thermal-protection tiles. The highest-heating areas, near the nose and along the leading edge of each wing, are protected by a material called reinforced carbon-carbon (RCC). As the shuttle pushes air molecules out of the way, says Deborah Tomak, project manager of HYTHIRM, a boundary layer or protective region, similar to insulation, forms around the shuttle where temperatures are between 1,093 and 1,649 degrees Celsius. Just outside that boundary layer, temperatures can rise to a sweltering 5,500 degrees.

Any damage to the tiles, or a protrusion or bump on the underbelly of the shuttle, can cause a break in the boundary layer and let in extreme heat. Of particular concern are the gap fillers, pieces of ceramic-coated fabric the thickness of a sheet of paper that fit between the tiles to provide cushioning and that have been known to protrude. (NASA, however, says the fillers do not pose a safety concern.)

The Langley researchers imaged three shuttle missions: Discovery on March 28 (STS-119); Atlantis on May 24 (STS-125); and Discovery again on September 11 (STS-128). They also conducted two small flight-research experiments. "We added a tiny bump to Discovery's wing, approximately a quarter of an inch, to better understand what is called a boundary layer transition or trip in the flow fields," says Tomak. The researchers also coated two of the tiles with a material that is being developed for the heat shield of the Orion crew exploration vehicle.

The researchers are just beginning to process all the collected data into 3-D surface temperature maps, which they will compare with measurements from thermal sensors on the shuttle's underbelly and with computational fluid dynamic models. Horvath says they will present their results at a conference in January 2010.

However, the researchers have already seen some unexpected results. A small imperfection on the wing opposite the bump purposely placed on Discovery's wing, possibly as small as a tenth of an inch, created high temperatures over a much larger area than is normally seen, says Horvath.

Osheroff says he is interested to see whether the analysis finds different results for different orbiters. For example, Columbia was the first shuttle built and was 20,000 pounds heavier than the other orbiters. "Heating patterns depend on the attitude or orientation of the orbiter during reentry, so it would be beneficial to conduct tests for at least two flights of each orbiter."

There are only six remaining space shuttle flights before the orbiters are scheduled to retire. Horvath says the researchers hope they can continue to image the remaining missions, but final approval is still pending. "Our ability to accurately predict thermal data will have a profound impact on designs for new vehicles," he says.

By Brittany Sauser

Time Lens Speeds Optical Data

Researchers at Cornell University have developed a simple silicon device for speeding up optical data. The device incorporates a silicon chip called a "time lens," lengths of optical fiber, and a laser. It splits up a data stream encoded at 10 gigabits per second, puts it back together, and outputs the same data at 270 gigabits per second. Speeding up optical data transmission usually requires a lot of energy and bulky, expensive optics. The new system is energy efficient and is integrated on a compact silicon chip. It could be used to move vast quantities of data at fast speeds over the Internet or on optical chips inside computers.

Time lens: This silicon chip, called a time lens, is patterned with waveguides that split optical signals and combine them with laser light to speed data rates.

Most of today's telecommunications data is encoded at a rate of 10 gigabits per second. As engineers have tried to expand to greater bandwidths, they've come up against a problem. "As you get to very high data rates, there are no easy ways of encoding the data," says Alexander Gaeta, professor of applied and engineering physics at Cornell University, who developed the silicon device with Michal Lipson, associate professor of electrical and computer engineering. Their work is described online in the journal Nature Photonics.

The new device could also be a critical step in the development of practical optical chips. As electronics speed up, "power consumption is becoming a more constraining issue, especially at the chip level," says Keren Bergman, professor of electrical engineering at Columbia University, who was not involved with the research. "You can't have your laptop run faster without it getting hotter" and consuming more energy, says Bergman. Electronics have an upper limit of about 100 gigahertz. Optical chips could make computers run faster without generating waste heat, but because of the nature of light--photons don't like to interact--it takes a lot of energy to create speedy optical signals.

The new ultrafast modulator gets around this problem because it can compress data encoded with conventional equipment to ultrahigh speeds. The Cornell device is called a "time telescope." While an ordinary lens changes the spatial form of a light wave, a time lens stretches it out or compresses it over time. Brian Kolner, now a professor of applied science and electrical and computer engineering at the University of California, Davis, laid the theoretical groundwork for the time lens in 1988 while working at Hewlett-Packard. He made one in the early 1990s, but it required an expensive crystal modulator that took a lot of energy. The Cornell work, Kolner says, is "a sensible engineering step forward to reduce the proofs of principle to a useful practice."

Here's how the Cornell system works. First, a signal is encoded on laser light using a conventional modulator. The light signal is then coupled into the Cornell chip through an optical-fiber coil, which carries it onto a nanoscale-patterned silicon waveguide. Just as a guitar chord is made up of notes from different strings, the signal is made up of different frequencies of light. While on the chip, the signal interacts with light from a laser, causing it to split into these component frequencies. The light travels through another length of cable onto another nanoscale-patterned silicon waveguide, where it interacts with light from the same laser. In the process, the signal is put back together, but with its phase altered. It then leaves the chip by means of another length of optical fiber, at a rate of 270 gigabits per second.
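The jump from 10 to 270 gigabits per second amounts to a 27-fold temporal compression, squeezing each bit's time slot from 100 picoseconds to under 4. A sketch of that arithmetic (the numbers, not the optics):

```python
# The reported rate change corresponds to a 27x temporal compression:
# each bit's time slot shrinks from 100 ps to roughly 3.7 ps.
rate_in_gbps = 10
rate_out_gbps = 270

compression = rate_out_gbps / rate_in_gbps
slot_in_ps = 1000 / rate_in_gbps    # bit period at 10 Gb/s, in picoseconds
slot_out_ps = 1000 / rate_out_gbps  # bit period at 270 Gb/s

print(f"compression factor: {compression:.0f}x")                  # 27x
print(f"bit slot: {slot_in_ps:.0f} ps -> {slot_out_ps:.1f} ps")   # 100 -> 3.7
```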

The physics are complex, but the net effect, says Bergman, is to "take a stream of bits that are kind of slow and make them go much faster." The time telescope transmits more data in less time, and does so in an energy-efficient manner, because the only power required is that needed to run the laser.

The Cornell device is one of a series of recent breakthroughs in silicon photonics. "Silicon is this amazing electronic material, and for a long time it was viewed as being a so-so optical material," says Gaeta. Over the past five years, researchers have been overturning this notion. In 2005, researchers at Intel made the first silicon laser; subsequently, other optical components, including modulators--devices for encoding information on light waves--have been made from the material. "People keep saying you have to replace silicon to do very high-speed processing, but silicon may be the way to go," says Gaeta.

Sticking with silicon has two advantages. First, manufacturers already have the infrastructure for making devices out of silicon. "You can leverage all the technologies that have been developed for electronics to make optical devices," says Gaeta. And if electronics and optics can be made out of the same material, it could be much easier to integrate them on the same chip and have each do what it does best: processing in the case of electronics, ultrafast data transmission in the case of optics.

By Katherine Bourzac

A Simpler, Gentler Robotic Grip

Industrial robots have been helping out in factories for a while, but most need a complex hand and powerful software to grasp ordinary objects without damaging them.

Researchers from Harvard and Yale Universities have developed a simple, soft robotic hand that can grab a range of objects delicately and automatically adjusts its fingers to get a good grip. The new hand could also potentially be useful as a prosthetic.

Soft touch: This four-fingered robotic hand contains sensors that help it pick up a variety of objects.

"When you start to bring robots into human environments, all of a sudden there's a big advantage for being compliant," says Charlie Kemp, an assistant professor at Georgia Tech who designs home-assist robots. "You can't always know what is where. You don't just want to push through the world and break something. You want to have the mechanics comply."

Other researchers have developed soft robotic hands, such as the Sensopac, developed at Intel, or the Obrero, created at MIT, but these are generally loaded with multiple types of complex sensors and motors, and they require powerful algorithms to account for every joint and finger movement. In contrast, the new robotic hand has just a few sensors and a single motor, yet it can pick up a variety of objects with the flexibility of a human hand.

Aaron Dollar, an assistant professor at Yale University who led the work, notes that when reaching for an object, people do not normally use a rigid grasp, but keep their fingers relaxed, so as to avoid knocking the object over. Making the robotic hand flexible allows it to pick up objects even with minor calculation errors. Embedded sensors also allow the new hand to feel an object and adjust its grip. In much the same way, says Dollar, if you reach for a coffee mug with your eyes closed, you will feel around for the best grip before picking it up.

"So basically we can start off without knowing anything about the object and reach the hand forward and through this very simple algorithm, to build up a better understanding of where the object is located," says Dollar. The hand has recently been licensed by Barrett Technology, a company based in Cambridge, MA, that sells robotic hands for research.

"They've shown the hand is compliant, so you don't have to have it as precisely positioned as you do other types of hands," says Kemp of Dollar's work. "The robot doesn't have to know exactly where the object is, and in human environments, that's really valuable." He adds that a rigid grasp requires more precise control and execution--and therefore usually more computing power.
Grab it: The robotic hand uses its embedded sensors to sense an object by touch; it then adjusts itself to get a good grip.

Dollar's robotic hand consists of four fingers made out of a flexible, durable polymer. A single motor and spool tugs on the finger joints to open and close the hand. Each soft polymer finger contains embedded sensors, as detailed in the August 2009 online issue of Autonomous Robots. Dollar embedded two piezoelectric sensors--which report physical contact as a voltage response--into each of the four fingers through a molding process called shape deposition manufacturing (SDM). This process allows different materials to be deposited one layer at a time, so that sensors or other items can be set inside the material, which also protects those components.

"Traditional robot hand designs can be very complicated and comprise tens or hundreds of tiny parts that need to be painstakingly assembled," says Andrew Ng, a 2008 TR35 winner and associate professor at Stanford University who works on household robots. "Dollar's work gives robot designers a new and exciting way to build robot hands." He adds that he is planning to apply the manufacturing technique to his own work.

"[Dollar's] work is pushing forward on how we can have intelligent mechanics with low-level sensing and control," says Kemp. "It will make things work better, without having to have a lot of sensing and computation. That's exactly the type of thing we want right now, because we want robots in human environments."

The system currently employs only one type of grasp--the "power" grip, which is useful for picking up some objects, such as a soda can, a ball, or a hammer. Next, Dollar hopes to add a "precision" grip, to enable the hand to pick up smaller objects, such as a pen.

Dollar's former colleague Robert Howe will collaborate with Peter Allen, a Columbia University professor who has devised software to simplify robotic grasping, to improve the functionality of the hand. Dollar hopes the hand will eventually be used not just in robotics but also for prosthetics; using "a stiff robotic hand to shake someone's hand is a lot less human-like," he says.

By Kristina Grifantini

Carbon Nanostructure Research May Lead To Revolutionary New Devices

ScienceDaily (Sep. 28, 2009) — Dr. Jiwoong Park of Cornell University, who receives funding for basic research from the Air Force Office of Scientific Research (AFOSR), is investigating carbon nanostructures that may some day be used in electronic, thermal, mechanical and sensing devices for the Air Force.

"Devices that are required in many of the Air Force missions are somewhat different from commercial ones in the sense that they are often exposed to harsh environments while maintaining their maximum performance," Park said. "Carbon-based nanostructures, including carbon nanotubes and graphenes (thin layers of graphite) present many exciting properties that may lead to new device structures."

Park's team of researchers is examining single molecules, nanocrystals, nanowires, carbon nanotubes and their arrays in an effort to find a "bridging" material that has a stable structure for making molecular-level bonds. In addition, they are seeking an effective tool for resolving functional and structural challenges. If successful, they will be able to apply the research to future technological advances.

Park's research may contribute to the discovery of new electronic and optical devices that will revolutionize electrical engineering and bioengineering as well as physical and materials science.

As a result of Park's highly innovative work, the U.S. government has selected him as a 2008 winner of the PECASE (Presidential Early Career Award for Scientists and Engineers). The prestigious and much-sought-after award is the highest honor the government presents to promising scientists and engineers at the beginning of their careers. Each award winner receives a citation, a plaque, and up to $1 million in funding from the nominating agency (AFOSR).

"I fully expect that over the five-year period of the PECASE award, Professor Park will have established himself as a world leader in carbon nanotube and graphene research," said Dr. Harold Weinstock, the AFOSR program manager responsible for nominating Park.

Two Proteins Enable Skin Cells To Regenerate

ScienceDaily (Sep. 28, 2009) — Never mind facial masks and exfoliating scrubs, skin takes care of itself. Stem cells located within the skin actively generate differentiating cells that can ultimately form either the body surface or the hairs that emanate from it. In addition, these stem cells are able to replenish themselves, continually rejuvenating skin and hair. Now, researchers at Rockefeller University have identified two proteins that enable these skin stem cells to undertake this continuous process of self-renewal.

The work, published in Nature Genetics, brings new details to the understanding of how stem cells maintain — and lose — their status as stem cells and are able to specialize into various types of cells. It also further dissects a ubiquitous Rube Goldberg-like pathway whose molecular gears and levers play an important role in activating stem cells to divide and transform into tissue-making cells.

Lead researcher Elaine Fuchs, head of the Laboratory of Mammalian Cell Biology and Development, and first author Hoang Nguyen, a former postdoc in the lab, worked with mice engineered to lack the proteins TCF3 and TCF4, which reside in the nucleus of skin stem cells, where they bind to DNA to turn off genes that would otherwise cause the stem cells to differentiate. They found that without TCF3 and TCF4, all of the layers of the mice’s skin still develop properly, but they cannot be maintained.

Skin deep. In a skin grafting experiment, skin with TCF3 and TCF4 (top), unlike skin without the two proteins (bottom), can activate epidermal stem cells to replenish skin cells on the surface that have died and flaked off.

“The epidermal stem cells — one of the types of stem cells in the skin — lose their capacity to self-renew and replace skin cells that have died,” says Nguyen, who is now an assistant professor at Baylor College of Medicine in Houston, Texas. “We show that the epidermis cannot be maintained long-term without these two proteins. And that’s what we see in the Petri dish as well as our skin-grafting experiments.”

The TCF proteins (there is a family of four) are found in many stem cells of the body. Their ability to turn genes on depends on signals they receive from their molecular environment that result in the stabilization of a partner molecule called β-catenin.

Fuchs, who is a Howard Hughes Medical Institute investigator and Rebecca C. Lancefield Professor at Rockefeller, and Nguyen have now learned that TCF3 and TCF4 can also work in skin stem cells by keeping off genes when nuclear β-catenin is not around. “In the hair follicles, TCF3 and TCF4 are required to maintain the stem cells as stem cells when β-catenin is not around, but when β-catenin is there, hair growth is activated,” says Fuchs. “In the skin epidermis, TCF3 and TCF4 apparently maintain the epidermal stem cells all by themselves, without β-catenin.”

“If the TCF proteins always act through β-catenin in the skin, then you would always see the same failures whenever β-catenin or TCF3 and TCF4 are missing,” says Nguyen. “But you don’t. The epidermis seems to rely more on TCF3 and TCF4 while hair follicles require TCF3 and TCF4 and β-catenin. This means that there is an arm of this pathway that has never been explored.”

Although the finding is new for mammalian stem cells, it has parallels in other organisms, including worms. “TCFs have ancient origins and the worm field has long believed that TCFs have functions that are not dependent upon β-catenin,” says Fuchs. The parallels between worms and mammalian skin open new avenues of research into regenerative therapies that could illuminate how to trigger skin growth for burn victims or hair growth for those suffering from hair loss.

“Skin stem cells hold great promise for regenerative medicine,” says Fuchs. “And the more we know about what makes a stem cell a stem cell and the different cocktail of molecules that give these cells their properties, the more sophisticated we can be about understanding their basic biology and using them for the benefit of society.”

Secrets Of The Sandcastle Worm Could Yield A Powerful Medical Adhesive

ScienceDaily (Sep. 27, 2009) — Scientists have copied the natural glue secreted by a tiny sea creature called the sandcastle worm in an effort to develop a long-sought medical adhesive needed to repair bones shattered in battlefield injuries, car crashes and other accidents. They reported on the adhesive here today at the 238th National Meeting of the American Chemical Society (ACS).

"This synthetic glue is based on complex coacervates, an ideal but so far unexploited platform for making injectable adhesives," says Russell Stewart, Ph.D. "The idea of using natural adhesives in medicine is an old one dating back to the first investigations of mussel adhesives in the 1980s. Yet almost 30 years later there are no adhesives based on natural adhesives used in the clinic."

The traditional method of repairing shattered bones is to use mechanical connectors like nails, pins and metal screws for support until they can bear weight. But achieving and maintaining alignment of small bone fragments using screws and wires is challenging, Stewart said. For precise reconstruction of small bones, health officials have acknowledged that a biocompatible, biodegradable adhesive could be valuable because it would reduce metal hardware in the body while maintaining proper alignment of fractures.

In the lab, the sandcastle worm makes a protective home out of beads of zirconium oxide. Scientists at the University of Utah have created a synthetic version of the worm's glue for possible use in repairing fractured bones.

Stewart and colleagues duplicated the glue that sandcastle worms (Phragmatopoma californica) use while building their homes in intertidal surf by sticking together bits of sand and broken sea shells. The inch-long marine worm had to overcome several adhesive challenges in order to glue together its underwater house, and its ingenuity has served as a recipe for Stewart's research team in developing the synthetic adhesive.

Stewart's challenge was to devise a water-based adhesive that remained insoluble in wet environments and was able to bond to wet objects. The team also concentrated on key details of the natural adhesive solidification process — a poorly timed hardening of the glue would make it useless, Stewart said. They learned the natural glue sets in response to changes in pH, a mechanism that was copied into the synthetic glue.

The new glue, says Stewart, a bioengineer at the University of Utah in Salt Lake City, has passed toxicity studies in cell culture. It is at least as strong as Super Glue and is twice as strong as the natural adhesive it mimics, he notes.

"We recognized that the mechanism used by the sandcastle worm is really a perfect vehicle for producing an underwater adhesive," Stewart said. "This glue, just like the worm's glue, is a fluid material that, although it doesn't mix with water, is water soluble."

Stewart has begun pilot studies on loading the adhesive with bioactive molecules, so that it could both fix bone fragments in place and deliver medicines to the fracture site, such as antibiotics, pain relievers or compounds that might accelerate healing.

"We are very optimistic about this synthetic glue," he said. "Biocompatibility is one of the major challenges of creating an adhesive like this. Anytime you put something synthetic into the body, there's a chance the body will respond to it and damage the surrounding tissue. That's something we will monitor, but we've seen no indication right now that it will be a problem."

Woody Plants Adapted To Past Climate Change More Slowly Than Herbs

ScienceDaily (Sep. 27, 2009) — Can we predict which species will be most vulnerable to climate change by studying how they responded in the past? A new study of flowering plants provides a clue. An analysis of more than 5000 plant species reveals that woody plants — such as trees and shrubs — adapted to past climate change much more slowly than herbaceous plants did. If the past is any indicator of the future, woody plants may have a harder time than other plants keeping pace with global warming, researchers say.

In a new study, biologists at the National Evolutionary Synthesis Center and Yale University teamed up to find out how flowering plants adapted to new climates over the course of their evolution. By integrating previously published genealogies for several plant groups with temperature and rainfall data for each species, they were able to measure how fast each lineage filled new climate niches over time.

When they compared woody and herbaceous groups, they found that woody plants adapted to new climates 2 to 10 times more slowly than herbs. "Woody plants eventually evolved to occupy about the same range of climates that herbaceous plants did, but woody plants took a lot longer to get there," said lead author Stephen Smith, a postdoctoral researcher at the National Evolutionary Synthesis Center in Durham, NC.

Santa Elena Cloud Forest in Costa Rica. If evolutionary history repeats itself, woody plants may have a harder time than herbaceous plants keeping pace with global warming.


The researchers trace the disparity to differences in generation time between the two groups. Longer-lived plants like trees and shrubs typically take longer to reach reproductive age than fast-growing herbaceous plants, they explained. "Some woody plants take many years to produce their first flower, whereas for herbs it could take just a couple months," said co-author Jeremy Beaulieu, a graduate student at Yale University.

Because woody plants have longer reproductive cycles, they also tend to accumulate genetic changes at slower rates, prior research shows. "If genetic mutations build up every generation, then in 1000 years you would expect plants with longer generation times to accumulate fewer mutations per unit time," said Smith. This could explain why woody plants were slower to adapt to new environments. If genetic mutations provide the raw material for evolution, then woody plants simply didn't accumulate mutations fast enough to keep up. "If woody and herbaceous plants were running a race, the herbs would be the hares and the woody plants would be the tortoises," said Beaulieu.
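The generation-time arithmetic is easy to make concrete. Below is a minimal sketch (in Python; the per-generation rate and generation times are invented for illustration, not values from the study) of why the same per-generation mutation input yields far fewer mutations per millennium in long-lived plants.

```python
# Toy numbers, for illustration only: mutations accumulate per
# generation, so longer generations mean fewer mutations per year.
MUTATIONS_PER_GENERATION = 0.1  # hypothetical rate

def mutations_accumulated(years, generation_time_years):
    generations = years / generation_time_years
    return generations * MUTATIONS_PER_GENERATION

herb = mutations_accumulated(1000, generation_time_years=0.5)  # flowers in months
tree = mutations_accumulated(1000, generation_time_years=25)   # years to first flower
print(f"herb: {herb:.0f} mutations in 1000 years")  # herb: 200
print(f"tree: {tree:.0f} mutations in 1000 years")  # tree: 4
```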

By understanding how plants responded to climate change in the past, scientists may be better able to predict which groups will be hardest hit by global warming in the future. Unlike the tortoise and the hare, however, in this case slow and steady may not win the race. "Woody groups are obviously at a disadvantage as the climate changes," Beaulieu explained.

Does this mean that ecosystems dominated by trees — such as rainforests — will be more likely to disappear? Possibly. "If we look to the past for our clues, chances are trees will continue to respond much slower than herbs — as much as 10 times slower," Smith said. "But if the rate of climate change is 100 times faster, then they could all be in trouble. The kind of change we're experiencing now is so unprecedented," he added. While this study focused on long-term change over the last 100 million years, most climate models predict significant warming in the next century, the researchers explained. "That time frame may be too quick for any plant," Beaulieu said.

The National Evolutionary Synthesis Center (NESCent) is an NSF-funded collaborative research center operated by Duke University, the University of North Carolina at Chapel Hill, and North Carolina State University.

The team's findings appear online in the Sept. 23 issue of Proceedings of the Royal Society B.

Engineers Track Bacteria's Kayak Paddle-like Motion For First Time

ScienceDaily (Sep. 26, 2009) — Yale engineers have for the first time observed and tracked E. coli bacteria moving in a liquid medium with a motion similar to that of a kayak paddle.

Their findings, to appear online September 29 in the journal Physical Review Letters, will help lead to a better understanding of how bacteria move from place to place and, potentially, how to keep them from spreading.

Scientists have long theorized that the cigar-shaped cell bodies of E. coli and other microorganisms would follow periodic orbits that resemble the motion of a kayak paddle as they drift downstream in a current. Until now, no one had managed to directly observe or track those movements.

The team took sequential images of the E. coli bacteria to track their movements, which resemble the motion of a kayak paddle, through a liquid medium.


Hur Koser, associate professor at Yale's School of Engineering & Applied Science, previously discovered that hydrodynamic interactions between the bacteria and the current align the bacteria in a way that allows them to swim upstream. "They find the most efficient route to migrate upstream, and we ultimately want to understand the mechanism that allows them to do that," Koser said.

In the new study, Koser, along with postdoctoral associate and lead author of the paper, Tolga Kaya, devised a method to see this motion in progress. They used advanced computer and imaging technology, along with sophisticated new algorithms, to take millions of high-resolution images of tens of thousands of individual, non-flagellated E. coli drifting in a water and glycerin solution, which amplified the bacteria's paddle-like movements.

The team characterized the bacteria's motion as a function of both their length and their distance from the surface, finding that the longer the cells were and the closer they drifted to the surface, the more slowly the E. coli "paddled."
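A common way to formalize this tumbling is G. B. Jeffery's classical result for an elongated particle in shear flow, in which the orbit period grows with the particle's aspect ratio, so longer cells tumble more slowly. The sketch below evaluates that textbook formula; the shear rate and aspect ratios are illustrative values, not data from the Yale experiments.

```python
import math

def jeffery_period(shear_rate, aspect_ratio):
    """Period of a Jeffery orbit for a spheroid in simple shear:
    T = (2*pi / shear_rate) * (r + 1/r), with aspect ratio r."""
    r = aspect_ratio
    return (2 * math.pi / shear_rate) * (r + 1.0 / r)

gamma = 5.0  # 1/s, illustrative shear rate
for r in (2, 4, 8):  # longer cells -> larger r -> slower "paddling"
    print(f"aspect ratio {r}: period {jeffery_period(gamma, r):.1f} s")
```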

It took the engineers months to perfect the intricate camera and computer system that allowed them to take 60 to 100 sequential images per second, then automatically and efficiently analyze the huge amount of resulting data.

E. coli and other bacteria can colonize wherever there is water and sufficient nutrients, including the human digestive tract. They encounter currents in many settings, from riverbeds to home plumbing to irrigation systems for large-scale agriculture.

"Understanding the physics of bacterial movement could potentially lead to breakthroughs in the prevention of bacterial migration and sickness," Koser said. "This might be possible through mechanical means that make it more difficult for bacteria to swim upstream and contaminate water supplies, without resorting to antibiotics or other chemicals."

'Green' Roofs Could Help Put Lid On Global Warming

ScienceDaily (Sep. 26, 2009) — "Green" roofs, those increasingly popular urban rooftops covered with plants, could help fight global warming, scientists in Michigan are reporting. The scientists found that replacing traditional roofing materials with green roofs in an urban area the size of Detroit, with a population of about one million, would be equivalent to eliminating a year's worth of carbon dioxide emitted by 10,000 mid-sized SUVs and trucks.

Their study, the first of its kind to examine the ability of green roofs to sequester carbon and thereby help counter climate change, is scheduled for the Oct. 1 issue of ACS' Environmental Science & Technology, a semi-monthly journal.

"Green" roofs, such as the one above, could fight climate change, scientists report.



Kristin Getter and colleagues point out in the new study that green roofs are multi-functional. They reduce heating and air conditioning costs, for instance, and retain and detain stormwater. Researchers knew that green roofs also absorb carbon dioxide, a major greenhouse gas that contributes to global warming, but nobody had measured the impact until now.

The scientists measured carbon levels in plant and soil samples collected from 13 green roofs in Michigan and Maryland over a two-year period. They found that green roofing an urban area of about one million people would capture more than 55,000 tons of carbon. That's an amount "similar to removing more than 10,000 mid-sized SUVs or trucks off the road a year," the article notes.
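The vehicle comparison is straightforward division on the two quoted figures; the sketch below just reproduces that arithmetic (the resulting per-vehicle emission factor is implied by the article's numbers, not stated in it).

```python
carbon_captured_tons = 55_000  # total carbon captured, per the study
vehicles = 10_000              # mid-sized SUVs/trucks cited

tons_per_vehicle_year = carbon_captured_tons / vehicles
print(f"~{tons_per_vehicle_year:.1f} tons of carbon per vehicle per year")  # ~5.5
```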

Superheavy Element 114 Confirmed: A Stepping Stone To The 'Island Of Stability'

ScienceDaily (Sep. 25, 2009) — Scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory have been able to confirm the production of the superheavy element 114, ten years after a group in Russia, at the Joint Institute for Nuclear Research in Dubna, first claimed to have made it. The search for 114 has long been a key part of the quest for nuclear science’s hoped-for Island of Stability.

Heino Nitsche, head of the Heavy Element Nuclear and Radiochemistry Group in Berkeley Lab’s Nuclear Science Division (NSD) and a professor of chemistry at the University of California at Berkeley, and Ken Gregorich, a senior staff scientist in NSD, led the team that independently confirmed the production of the new element, whose creation was first reported by the Dubna Gas-Filled Recoil Separator group.

Using an instrument called the Berkeley Gas-filled Separator (BGS) at Berkeley Lab’s 88-Inch Cyclotron, the researchers were able to confirm the creation of two individual nuclei of element 114, each a separate isotope having 114 protons but different numbers of neutrons, and each decaying by a separate pathway.

Members of the group that confirmed the production of element 114 in front of the Berkeley Gas-filled Separator at the 88-Inch Cyclotron, from left: Jan Dvorak, Zuzana Dvorakova, Paul Ellison, Irena Dragojevic, Heino Nitsche, Mitch Andre Garcia, and Ken Gregorich. Not pictured is Liv Stavsetra.




“By verifying the production of element 114, we have removed any doubts about the validity of the Dubna group’s claims,” says Nitsche. “This proves that the most interesting superheavy elements can in fact be made in the laboratory.”

Verification of element 114 is reported in Physical Review Letters. In addition to Nitsche and Gregorich, the Berkeley Lab team included Liv Stavsetra, now at the Institute for Energy Technology in Kjeller, Norway; Berkeley Lab postdoctoral fellow Jan Dvořák; and UC Berkeley graduate students Mitch André Garcia, Irena Dragojević, and Paul Ellison, with laboratory support from UC Berkeley postdoctoral fellow Zuzana Dvořáková.

The realm of the superheavy

Elements heavier than uranium, element 92 – the atomic number refers to the number of protons in the nucleus – are radioactive and decay in a time shorter than the age of Earth; thus they are not found in nature (although traces of transient neptunium and plutonium can sometimes be found in uranium ore). Elements up to 111 and the recently confirmed 112 have been made artificially – those with lower atomic numbers in nuclear reactors and nuclear explosions, the higher ones in accelerators – and typically decay very rapidly, within a few seconds or fractions of a second.

Beginning in the late 1950s, scientists including Gertrude Scharff-Goldhaber at Brookhaven and theorist Wladyslaw Swiatecki, who had recently moved to Berkeley and is a retired member of Berkeley Lab’s NSD, calculated that superheavy elements with certain combinations of protons and neutrons arranged in shells in the nucleus would be relatively stable, eventually reaching an “Island of Stability” where their lifetimes could be measured in minutes or days – or even, some optimists think, in millions of years. Early models suggested that an element with 114 protons and 184 neutrons might be such a stable element. Longtime Berkeley Lab nuclear chemist Glenn Seaborg, then Chairman of the Atomic Energy Commission, encouraged searches for superheavy elements with the necessary “magic numbers” of nucleons.

“People have been dreaming of superheavy elements since the 1960s,” says Gregorich. “But it’s unusual for important results like the Dubna group’s claim to have produced 114 to go unconfirmed for so long. Scientists were beginning to wonder if superheavy elements were real.”

To create a superheavy nucleus requires shooting one kind of atom at a target made of another kind; the total protons in both projectile and target nuclei must at least equal that of the quarry. Confirming the Dubna results meant aiming a beam of 48Ca ions – calcium whose nuclei have 20 protons and 28 neutrons – at a target containing 242Pu, the plutonium isotope with 94 protons and 148 neutrons. The 88-Inch Cyclotron’s versatile Advanced Electron Cyclotron Resonance ion source readily created a beam of highly charged calcium ions, atoms lacking 11 electrons, which the 88-Inch Cyclotron then accelerated to the desired energy.
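The nucleon bookkeeping behind that choice of beam and target is easy to check; the sketch below simply tallies protons and neutrons for this reaction (the number of neutrons boiled off by the hot compound nucleus is inferred from the isotope masses reported below, not stated here).

```python
# Proton/neutron bookkeeping for 48Ca + 242Pu -> element 114.
ca48 = {"Z": 20, "N": 28}    # calcium-48 projectile
pu242 = {"Z": 94, "N": 148}  # plutonium-242 target

Z = ca48["Z"] + pu242["Z"]   # 114 protons -> element 114
N = ca48["N"] + pu242["N"]   # 176 neutrons
A = Z + N                    # mass number 290 before any neutrons escape
print(f"compound nucleus: Z={Z}, A={A}")
for n_out in (3, 4):         # evaporating 3 or 4 neutrons leaves the
    print(f"after {n_out}n evaporation: A={A - n_out}")  # 287 and 286
```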

Four plutonium oxide target segments were mounted on a wheel 9.5 centimeters (about 4 inches) in diameter, which spun 12 to 14 times a second to dissipate heat under the bombardment of the cyclotron beam.

“Plutonium is notoriously difficult to manage,” says Nitsche, “and every group makes their targets differently, but long experience has given us at Berkeley a thorough understanding of the process.” (Experience is especially long at Berkeley Lab and UC Berkeley – not least because Glenn Seaborg discovered plutonium here early in 1941.)

When projectile and target nuclei interact in the target, many different kinds of nuclear reaction products fly out the back. Because nuclei of superheavy elements are rare and short-lived, both the Dubna group and the Berkeley group use gas-filled separators, in which dilute gas and tuned magnetic fields sweep the copious debris of beam-target collisions out of the way, ideally leaving only compound nuclei with the desired mass to reach the detector. The Berkeley Gas-filled Separator had to be modified for radioactive containment before radioactive targets could be used.

In sum, says Gregorich, “The high beam intensities from the 88-Inch Cyclotron, together with the efficient background suppression of the BGS, allow us to look for nuclear reaction products with very small cross-sections – that is, very low probabilities of being produced. In the case of element 114, that turned out to be just two nuclei in eight days of running the experiment almost continuously.”

Tracking the isotopes of 114

The researchers identified the two isotopes as 286114 (114 protons and 172 neutrons) and 287114 (114 protons and 173 neutrons). The former, 286114, decayed in about a tenth of a second by emitting an alpha particle (2 protons and 2 neutrons, a helium nucleus) – thus becoming a “daughter” nucleus of element 112 – which subsequently spontaneously fissioned into smaller nuclei. The latter, 287114, decayed in about half a second by emitting an alpha particle to form 112, which also then emitted an alpha particle to form daughter element 110, before spontaneously fissioning into smaller nuclei.
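Each alpha emission removes two protons and two neutrons, so the chains above can be verified with a few lines of arithmetic (a sketch tracking only proton and mass numbers):

```python
def alpha_decay(Z, A):
    """Alpha emission carries off 2 protons and 2 neutrons."""
    return Z - 2, A - 4

# 287114: two alpha steps down to element 110, then spontaneous fission.
Z, A = 114, 287
for _ in range(2):
    Z, A = alpha_decay(Z, A)
    print(f"daughter: element {Z}, mass number {A}")
# daughter: element 112, mass number 283
# daughter: element 110, mass number 279
```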

The Berkeley Group’s success in finding these two 114 nuclei and tracking their decay depended on sophisticated methods of detection, data collection, and concurrent data analysis. After passing through the BGS, the candidate nucleus enters a detector chamber. If a candidate element 114 atom is detected, and is subsequently seen to decay by alpha-particle emission, the cyclotron beam instantly shuts off so further decay events can be recorded without background interference.

In addition to such automatic methods of enhancing data collection, the data was analyzed by completely independent software programs, one written by Gregorich and refined by team member Liv Stavsetra, another written by team member Jan Dvořák.

“One surprise was that the 114 nuclei had much smaller cross sections – were much less likely to form – than the Dubna group reported,” Nitsche says. “We expected to get about six in our eight-day experiment but only got two. Nevertheless, the decay modes, lifetimes, and energies were all consistent with the Dubna reports and amply confirm their achievement.”

Says Gregorich, “Based on the ideas of the 1960s, we thought when we got to element 114 we would have reached the Island of Stability. More recent theories suggest enhanced stability at other proton numbers, perhaps 120, perhaps 126. The work we’re doing now will help us decide which theories are correct and how we should modify our models.”

Nitsche adds, “During the last 20 years, many relatively stable isotopes have been discovered that lie between the known heavy element isotopes and the Island of Stability – essentially they can be considered ‘stepping stones’ to this island. The question is, how far does the Island extend – from 114 to perhaps 120 or 126? And how high does it rise out of the Sea of Instability?”

The accumulated expertise in Berkeley Lab’s Nuclear Science Division; the recently upgraded Berkeley Gas-filled Separator that can use radioactive targets; the more powerful and versatile VENUS ion source that will soon come online under the direction of operations program head Daniela Leitner – all add up to Berkeley Lab’s 88-Inch Cyclotron remaining highly competitive in the ongoing search for a stable island in the sea of nuclear instability.

This work was supported by the U.S. Department of Energy’s Office of Science.

Scientists See Water Ice In Fresh Meteorite Craters On Mars

ScienceDaily (Sep. 25, 2009) — NASA's Mars Reconnaissance Orbiter has revealed frozen water hiding just below the surface of mid-latitude Mars. The spacecraft's observations were obtained from orbit after meteorites excavated fresh craters on the Red Planet.

Scientists controlling instruments on the orbiter found bright ice exposed at five Martian sites with new craters that range in depth from approximately half a meter to 2.5 meters (1.5 feet to 8 feet). The craters did not exist in earlier images of the same sites. Some of the craters show a thin layer of bright ice atop darker underlying material. The bright patches darkened in the weeks following initial observations, as the freshly exposed ice vaporized into the thin Martian atmosphere. One of the new craters had a bright patch of material large enough for one of the orbiter's instruments to confirm it is water-ice.

The finds indicate water-ice occurs beneath Mars' surface halfway between the north pole and the equator, a lower latitude than expected in the Martian climate.

Earlier and later HiRISE images of a fresh meteorite crater 12 meters, or 40 feet, across located within Arcadia Planitia on Mars show how water ice excavated at the crater faded with time. The images, each 35 meters, or 115 feet across, were taken in November 2008 and January 2009.



"This ice is a relic of a more humid climate from perhaps just several thousand years ago," said Shane Byrne of the University of Arizona, Tucson.

Byrne is a member of the team operating the orbiter's High Resolution Imaging Science Experiment, or HiRISE camera, which captured the unprecedented images. Byrne and 17 co-authors report the findings in the Sept. 25 edition of the journal Science.

"We now know we can use new impact sites as probes to look for ice in the shallow subsurface," said Megan Kennedy of Malin Space Science Systems in San Diego, a co-author of the paper and member of the team operating the orbiter's Context Camera.

During a typical week, the Context Camera returns more than 200 images of Mars that cover a total area greater than California. The camera team examines each image, sometimes finding dark spots that fresh, small craters make in terrain covered with dust. Checking earlier photos of the same areas can confirm a feature is new. The team has found more than 100 fresh impact sites, mostly closer to the equator than the ones that revealed ice.

An image from the camera on Aug. 10, 2008, showed apparent cratering that occurred after an image of the same ground was taken 67 days earlier. The opportunity to study such a fresh impact site prompted a look by the orbiter's higher resolution camera on Sept. 12, 2008, confirming a cluster of small craters.

"Something unusual jumped out," Byrne said. "We observed bright material at the bottoms of the craters with a very distinct color. It looked a lot like ice."

The bright material at that site did not cover enough area for a spectrometer instrument on the orbiter to determine its composition. However, a Sept. 18, 2008, image of a different mid-latitude site showed a crater that had not existed eight months earlier. This crater had a larger area of bright material.

"We were excited about it, so we did a quick-turnaround observation," said co-author Kim Seelos of Johns Hopkins University Applied Physics Laboratory in Laurel, Md. "Everyone thought it was water-ice, but it was important to get the spectrum for confirmation."

Mars Reconnaissance Orbiter Project Scientist Rich Zurek, of NASA's Jet Propulsion Laboratory, Pasadena, Calif., said, "This mission is designed to facilitate coordination and quick response by the science teams. That makes it possible to detect and understand rapidly changing features."

The ice exposed by fresh impacts suggests that NASA's Viking Lander 2, digging into mid-latitude Mars in 1976, might have struck ice if it had dug 10 centimeters (4 inches) deeper. The Viking 2 mission, which consisted of an orbiter and a lander, launched in September 1975 and became one of the first two space probes to land successfully on the Martian surface. The Viking 1 and 2 landers characterized the structure and composition of the atmosphere and surface. They also conducted on-the-spot biological tests for life on another planet.

Getting A Leg Up On Whale And Dolphin Evolution: New Comprehensive Analysis Sheds Light On The Origin Of Cetaceans

ScienceDaily (Sep. 25, 2009) — When the ancestors of living cetaceans—whales, dolphins and porpoises—first dipped their toes into water, they set off a series of evolutionary changes that ultimately nestled these swimming mammals into the larger hoofed animal group. But what happened first, a change from a plant-based diet to a carnivorous diet, or the loss of their ability to walk?

A new paper published this week in PLoS ONE resolves this debate using a massive data set of the morphology, behavior, and genetics of living and fossil relatives. Cetacean ancestors probably moved into water before changing their diet (and their teeth) to include carnivory; Indohyus, a 48-million-year-old semi-aquatic herbivore, and hippos fall closest to cetaceans when the evolutionary relationships of the larger group are reconstructed.

The Eocene "walking whale" (Ambulocetus natans) is a close relative of cetaceans.



"If you only had living taxa to figure out relationships within this group of animals, you would miss a large amount of diversity and part of the picture of what is going on," says Michelle Spaulding, lead author of the study and a graduate student affiliated with the American Museum of Natural History. "Indohyus is interesting because this fossil combines an herbivore's dentition with adaptations such as ear bones that are adapted for hearing under water and are traditionally associated with whales only."

The origin of whales, dolphins, and porpoises—with their highly modified legs and lack of hair—has long been a quandary for mammalogists. About 60 years ago, researchers first suggested that cetaceans were related to plant-eating ungulates, specifically to even-toed, artiodactyl mammals like sheep, antelope and pigs. In other words, carnivorous killer whales and fish-eating dolphins were argued to fit close to the herbivorous hoofed animal group. More recent genetic research found that among artiodactyls, hippos are the cetaceans' closest living relatives.

Because no one would ever link hippos and whales based on their appearance, fossil evidence became an important way to determine the precise evolutionary steps that cetacean ancestors took. Traditionally, the origin of whales was linked to the mesonychids, an extinct group of carnivores that had singly-hoofed toes. The recent discovery of Indohyus, a clearly water-adapted herbivore, complicates this picture (as new fossils often do) because its ear bones, similar to those of modern cetaceans, are thought to have helped the animal hear better underwater.

To tease apart different potential evolutionary histories (whether carnivory or water adaptations occurred first; the mesonychid or Indohyus relatedness ideas), Spaulding and colleagues mapped the evolutionary relationships among more than 80 living and fossil taxa (in other words, species and/or genera). These taxa were scored for 661 morphological and behavioral characters (such as the presence of hair or the shape of an ankle bone). Forty-nine new DNA sequences from five nuclear genes were also added to the mix of more than 47,000 characters; both morphological and genetic data build on previous analyses by authors Maureen O'Leary of Stony Brook University and John Gatesy of the University of California, Riverside. In addition, Indohyus, carnivores (dogs and cats), and an archaic group of meat-eating mammals called creodonts were included.
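A data set like this amounts to a large taxon-by-character matrix that tree-search software scores against candidate trees. The toy below is not the authors' data or software; it only illustrates the flavor of such scoring, using the classic Fitch parsimony rule (intersect the child state sets when possible, otherwise take the union and count one change) on a single made-up character.

```python
def fitch_score(tree, states):
    """Minimum number of state changes for one character on a rooted
    binary tree. `tree` is a nested tuple of taxon names; `states`
    maps each taxon to its character state (e.g. 0/1)."""
    changes = 0
    def post_order(node):
        nonlocal changes
        if isinstance(node, str):        # leaf: its observed state set
            return {states[node]}
        left, right = map(post_order, node)
        if left & right:                 # children agree: no change
            return left & right
        changes += 1                     # children disagree: one change
        return left | right
    post_order(tree)
    return changes

# Hypothetical character "underwater-adapted ear bones" (1 = present):
tree = ((("whale", "Indohyus"), "hippo"), "mesonychid")
print(fitch_score(tree, {"whale": 1, "Indohyus": 1,
                         "hippo": 0, "mesonychid": 0}))  # 1 change
```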

The team found that the least complex evolutionary tree places Indohyus and similar fossils close to whales, while mesonychids are more distantly related. Hippos remain the closest living relatives. These results suggest that cetacean ancestors transitioned to water before becoming carnivorous but that the meat-eating diet developed while these ancestors could still walk on land.

"How do you put flesh and movement onto a fossil?" asks author O'Leary. "The earliest stem whale probably ate prey in water while still being able to walk on land. Indohyus has some adaptations for hearing under water but also ate plants, while Ambulocetus (a walking whale that lived about 50 million years ago) seems to have been carnivorous."

“There is deep conflict in the evolutionary tree,” says Spaulding. “The backbone of the tree is robust and stable, but you have these fairly large clades that move around relative to this backbone (Indohyus and mesonychids). We need to really re-examine characters carefully and see what suite of traits are truly derived in different taxa to fully resolve this tree.”

This research was funded by separate National Science Foundation grants to all three authors.

Twin Keck Telescopes Probe Dual Dust Disks

ScienceDaily (Sep. 25, 2009) — Astronomers using the twin 10-meter telescopes at the W. M. Keck Observatory in Hawaii have explored one of the most compact dust disks ever resolved around another star. If placed in our own solar system, the disk would span about four times Earth’s distance from the sun, reaching nearly to Jupiter’s orbit. The compact inner disk is accompanied by an outer disk that extends hundreds of times farther.

The centerpiece of the study is the Keck Interferometer Nuller (KIN), a device that combines light captured by both of the giant telescopes in a way that allows researchers to study faint objects otherwise lost in a star’s brilliant glare. "This is the first compact disk detected by the KIN, and a demonstration of its ability to detect dust clouds a hundred times smaller than a conventional telescope can see," said Christopher Stark, an astronomer at NASA’s Goddard Space Flight Center in Greenbelt, Md., who led the research team.

By merging the beams from both telescopes in a particular way, the KIN essentially creates a precise blind spot that blocks unwanted starlight but allows faint adjacent signals – such as the light from dusty disks surrounding the star – to pass through.
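In the simplest two-telescope nuller of this kind (a Bracewell nuller), one beam is delayed by half a wavelength before combination, so an on-axis star cancels while off-axis light passes with a transmission proportional to sin²(πBθ/λ). The sketch below evaluates that idealized response; the 85-meter figure is the actual Keck–Keck separation, but the wavelength and offsets are illustrative.

```python
import math

def nuller_transmission(theta_rad, baseline_m, wavelength_m):
    """Idealized Bracewell nuller: zero transmission on axis,
    sin^2(pi * B * theta / lambda) for an off-axis source."""
    return math.sin(math.pi * baseline_m * theta_rad / wavelength_m) ** 2

B, lam = 85.0, 10e-6            # telescope separation; mid-infrared band
MAS = math.radians(1 / 3600e3)  # one milliarcsecond in radians
for theta_mas in (0, 3, 6, 12):
    t = nuller_transmission(theta_mas * MAS, B, lam)
    print(f"{theta_mas:>3} mas off axis: transmission {t:.2f}")
# 0 mas -> 0.00 (star suppressed); ~12 mas -> ~1.00 (disk light passes)
```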

This diagram compares 51 Ophiuchi and its dust disks to the sun, planets and zodiacal dust in the solar system. Zones with larger dust grains are red; those with smaller grains are blue. Planet sizes are not to scale.



In April 2007, the team targeted 51 Ophiuchi, a young, hot, B-type star about 410 light-years away in the constellation Ophiuchus. Astronomers suspect the star and its disks represent a rare, nearby example of a young planetary system just entering the last phase of planet formation, although it is not yet known whether planets have formed there.

"Our new observations suggest 51 Ophiuchi is a beautiful protoplanetary system with a cloud of dust from comets and asteroids extremely close to its parent star," said Marc Kuchner, an astronomer at Goddard and a member of the research team.

Planetary systems are surprisingly dusty places. Much of the dust in our solar system forms inward of Jupiter's orbit, as comets crumble near the sun and asteroids of all sizes collide. This dust reflects sunlight and sometimes can be seen as a wedge-shaped sky glow – called the zodiacal light – before sunrise or after sunset.

Dusty disks around other stars that arise through the same processes are called "exozodiacal" clouds. "Our study shows that 51 Ophiuchi’s disk is more than 100,000 times denser than the zodiacal dust in the solar system," explained Stark. "This suggests that the system is still relatively young, with many colliding bodies producing vast amounts of dust."

To decipher the structure and make-up of the star’s dust clouds, the team combined KIN observations at multiple wavelengths with previous studies from NASA’s Spitzer Space Telescope and the European Southern Observatory’s Very Large Telescope Interferometer in Chile.

The inner disk extends about 4 Astronomical Units (AU) from the star and rapidly tapers off. (One AU is Earth’s average distance from the sun, or 93 million miles.) The disk’s infrared color indicates that it mainly harbors particles with sizes of 10 micrometers – smaller than a grain of fine sand – and larger.

The outer disk begins roughly where the inner disk ends and reaches about 1,200 AU. Its infrared signature shows that it mainly holds grains just one percent the size of those in the inner disk – similar in size to the particles in smoke. Another difference: The outer disk appears more puffed up, extending farther away from its orbital plane than the inner disk.

"We suspect that the inner disk gives rise to the outer disk," explained Kuchner. As asteroid and comet collisions produce dust, the larger particles naturally spiral toward the star. But pressure from the star’s light pushes smaller particles out of the system. This process, which occurs in our own solar system, likely operates even better around 51 Ophiuchi, a star 260 times more luminous than the sun.