Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK Sees Hard Drive Breakthrough in Areal Density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers Invent New Device That Could Increase Internet...

The device uses the force generated by light to flip a mechanical light switch on and off at very high speed...

Innovative technique can spot errors in key technological systems

IODA separated good and bad data from an anemometer: blue = high quality; red = low quality

The patented technique, known as the Intelligent Outlier Detection Algorithm, or IODA, is described this month in the Journal of Atmospheric and Oceanic Technology.

IODA offers the potential to alert operators to faulty readings or other problems associated with failing sensors. If sensors malfunction and begin transmitting bad data, computers programmed with the algorithm could identify the problem and isolate that bad data.

IODA was developed by researchers at the National Center for Atmospheric Research (NCAR) and the University of Colorado at Boulder (CU).

The National Science Foundation (NSF), NCAR's sponsor, funded the research. "This technology will have broad applicability in many new areas," says Steve Nelson, NSF program director for NCAR.

The developers of the algorithm say its principles can eventually be used in a vast range of technological settings, including cars and other transportation systems, power plants, satellites and space exploration, and data from radars and other observing instruments.

"This could, at least in theory, enable operators to keep a system performing even while it's failing," says Andrew Weekley, a software engineer at NCAR who led the algorithm development effort. "When a system starts to fail, it's absolutely critical to be able to control it as long as possible. That can make the difference between disaster or not."

IODA is designed to perform quality control on time series data--that is, data collected over time, such as wind speeds over the course of a month.

The algorithm, an expert system that draws on statistics, graph theory, image processing and decision trees, can be applied in cases where the correct assessment of data is critical, the incoming data are too numerous for a human to easily review, or the consequences of a sensor failure would be significant.

At present the algorithm consists of several thousand lines of code in MATLAB, a technical computing language. The researchers may expand the algorithm and translate it into a programming language such as C so it can be used for commercial purposes.

Ensuring the quality of incoming time series data is a priority for virtually any organization involved in complex operations. If sensors begin relaying inaccurate information, it can be highly challenging for personnel or automated systems to separate good data from bad, especially in cases involving enormous amounts of information.

Typically, to identify bad data, complex operations may rely on multiple sensors, as well as algorithms that characterize specific relationships among the data being collected, and identify failures when the data unexpectedly change.

A drawback of most of these algorithms, however, is that they are designed for a particular type of time series and can fail catastrophically when applied to other types of data, especially in situations involving numerous and sometimes subtle errors.

IODA, however, compares incoming data to common patterns of failure--an approach that can be applied broadly because it is independent of a specific sensor or measurement.

Weekley and co-authors took a new approach to the problem when they began developing IODA 10 years ago. Whereas existing methods treat the data as a function of time, Weekley conceived of an algorithm that treats the data as an image.

This approach mimics the way a person might look at a plot of data points to spot an inconsistency.

For example, if a person looked at a line drawn between points on a graph that represented morning temperatures rising from 50 to 70 degrees, and then spotted a place where that smooth line was broken, dipping precipitously because of numerous data points down at 10 degrees, the person would immediately suspect there was a bad sensor reading.

In cases where there are thousands or even millions of data points about temperature or other variables, pinpointing the bad ones can be more difficult.
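The temperature example above can be sketched as a toy filter. This is an illustrative simplification, not NCAR's actual IODA code; the data and the 15-degree threshold are invented for the example:

```python
# Toy outlier check, loosely inspired by the temperature example above.
# Data and threshold are invented for illustration.
temps = [50, 52, 55, 58, 10, 62, 65, 68, 70]  # one bad 10-degree reading

def flag_jumps(series, max_step=15):
    """Flag a point as suspect if it jumps more than max_step degrees
    away from BOTH of its neighbors."""
    suspects = []
    for i in range(1, len(series) - 1):
        if (abs(series[i] - series[i - 1]) > max_step and
                abs(series[i] - series[i + 1]) > max_step):
            suspects.append(i)
    return suspects

print(flag_jumps(temps))  # [4] -- the 10-degree reading stands out
```

A rule this simple breaks down on noisier data or runs of consecutive bad points, which is exactly the gap IODA's image-based approach is meant to close.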

But Weekley thought that a computer could be programmed to recognize common patterns of failure through image processing techniques.

Then, like a person eyeing data, the computer could identify problems with data points such as jumps and intermittency; view patterns in the data; and determine not only whether a particular datum is bad but also characterize how it is inaccurate.

"Our thought was to organize a sequence of data as an image and apply image processing techniques to identify a failure unambiguously," Weekley says. "We thought that, by using image processing, we could teach the system to detect inconsistencies, somewhat like a person would."

The research team came up with ways of arranging data points in a time series into clusters, both in a domain that represents the data points over time and in another domain known as delay space.

Delay space, which offers another way to detect differences in the data, is a technique that pairs a data point in the time series with the previous value.

Using the clusters from both the time domain and delay space, bad data are separated into their own cluster, clearly distinct from the cluster of accurate data. At the same time, IODA can calculate quality scores indicating whether each individual data point is good or bad.
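A minimal sketch of the delay-space idea: pair each reading with its predecessor, then separate the pairs that fall far from the band traced by good data. This is an assumed simplification for illustration (a median-distance split on invented readings), not the published clustering algorithm:

```python
import statistics

# Invented wind-speed-like readings with two stuck-low sensor values.
readings = [50, 51, 53, 54, 56, 10, 11, 57, 58, 60]

# Delay-space embedding: each point paired with the previous value.
pairs = list(zip(readings[:-1], readings[1:]))

med = statistics.median(readings)

def classify(pair, tolerance=20):
    """Crude two-cluster split: a pair is 'bad' if either coordinate
    lies far from the series median."""
    return "bad" if any(abs(v - med) > tolerance for v in pair) else "good"

bad_pairs = [p for p in pairs if classify(p) == "bad"]
print(bad_pairs)  # the pairs touching the stuck 10/11 readings
```

In delay space the good data trace a tight diagonal band (each value close to its predecessor), so the stuck readings land in a visibly separate cluster, which is the property IODA exploits.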

"I would say the approach we report in the paper is a radical departure from the usual techniques found in the time series literature," says Kent Goodrich, a CU mathematician and a co-author of the paper.

"The image processing and other techniques are not new, but the use of these images and techniques together in a time series application is new. IODA is able to characterize good and bad points very well in some commonly encountered situations."

When the research team tested IODA, they found it accurately isolated incorrect data in several cases.

For example, they applied the algorithm to wind readings from anemometers in Alaska that contained faulty readings caused by a loose nut, which left the anemometers unable to consistently measure gusts in high-wind conditions. The algorithm identified the bad readings, separating them into a series of clusters away from the good data.

"This technique has very broad implications," Weekley says. "Virtually all control systems rely on time series data at some level, and the ability to identify suspect data along with the possible failure is very useful in creating systems that are more robust.

"We think it is a powerful methodology that could be applied to almost all sequences of measurements that vary over time."


The iPad: Like an iPhone, Only Bigger

Apple announced its latest creation, the iPad, at a special event in San Francisco, CA, today.

CEO Steve Jobs took the stage to unveil the device, which has been the subject of often dizzying speculation and excitement in recent weeks. "We want to kick off 2010 by introducing a magical and revolutionary product today," Jobs said.

Big idea: Apple CEO Steve Jobs reveals the iPad at an event in California.

The expectation and hope for many has been that Apple will revolutionize both the e-reader and tablet computing markets, just as it did with the cell-phone and PDA markets through the iPhone.

The iPad features a 9.7-inch (25-centimeter) multi-touch, in-plane switching LCD display; it is half an inch (1.3 centimeters) thick and weighs 1.5 pounds (0.68 kilograms). The main processor is a one-gigahertz chip made by Apple, and the device is said to offer 10 hours of battery life in full use.

Along with 802.11n wireless and Bluetooth, the iPad will connect to AT&T's 3G wireless network. But the data plan is a hybrid of what is offered for phones and laptops already. Users will be asked to pay either $14.99 a month for up to 250 megabytes of data, or $29.99 a month for unlimited data.

The device costs between $499 and $829. The cheapest model will come with Wi-Fi only and 16 gigabytes of flash memory; the most expensive version includes 64 gigabytes of memory and 3G access. The device will ship in 60 days.

During the announcement, Jobs was careful to distinguish the iPad from the netbooks that have grown popular as a cheap alternative to laptops for browsing the Internet and simple computing tasks. Championing the design principles for which Apple is famous, he argued that the new device had to be better than a laptop for Web browsing, sending e-mail, viewing photos, reading e-books, and other tasks.

The interface for the device is similar to that of the iPhone: a multi-touch screen and an on-screen keyboard. The iPad is designed to run all iPhone apps "unmodified, right out of the box," according to Scott Forstall, senior vice president of iPhone software. It can run them either in an iPhone-sized window on the screen, or full-screen at lower resolution. Developers can also modify their applications specifically for the iPad, using a new software development kit that Apple made available today. "We think it's going to be a whole other gold rush for developers as they build apps for the iPad," Forstall said.

Jobs and others demonstrated numerous applications running on an iPad. These included games, maps, and versions of Apple's iWork suite, showing word processing, spreadsheets, and presentations.

Representatives from the New York Times demonstrated an electronic version of the newspaper created especially for the iPad. Jobs also announced an e-book reader app for iPad called iBooks that will have access to the catalogs of five major book publishers--Penguin, HarperCollins, Simon & Schuster, Macmillan, and Hachette Book Group.

Carl Howe, an analyst focusing on mobile research at the Yankee Group, said that Apple is cleverly building on what it has already established with the iPod and iPhone.

Though Jobs did not focus as much attention on the e-reader potential of the device as expected, Howe believes that aspect could still prove significant. Because the iPad, unlike the Kindle, is designed with a high-resolution screen that can easily handle apps, movies, and music in addition to books, he thinks it will be more attractive to users than more dedicated e-readers. It might also be more attractive to publishers because the system will let them preserve more of their formatting and typography, and possibly allow for advertising.

By Erica Naone

From Technology Review

Skin Cells Turned into Brain Cells

Skin cells called fibroblasts can be transformed into neurons quickly and efficiently with just a few genetic tweaks, according to new research. The surprisingly simple conversion, which doesn't require the cells to be returned to an embryonic state, suggests that differentiated adult cells are much more flexible than previously thought.

If the research, published in the journal Nature yesterday, can be repeated in human cells, it would provide an easier method for generating replacement neurons from individual patients. Brain cells derived from a skin graft would be genetically identical to the patient and therefore remove the risk of immune rejection--such an approach might one day be used to treat Parkinson's or other neurodegenerative diseases.

Cellular transformation: A cocktail of three genes can transform skin cells into neurons (shown here in red).

"It's almost scary to see how flexible these cell fates are," says Marius Wernig, a biologist at the Institute for Stem Cell Biology and Regenerative Medicine at Stanford, who led the research. "You just need a few factors, and within four to five days you see signs of neuronal properties in these cells."

Three years ago, scientists shook up the stem cell field by demonstrating how to revert adult cells back to the embryonic state, using just four genetic factors. Research on these cells, known as induced pluripotent stem cells (iPS cells), has since exploded across the globe. IPS cells can be differentiated into any cell type, and show huge promise for drug screening and tissue replacement therapies. Scientists are now trying to push this newfound cellular flexibility further by converting adult cells directly from one type to another.

In 2008, Doug Melton, Qiao Zhou, and colleagues at Harvard University showed it was possible to convert one type of pancreatic cell into another, a feat that might one day help people with diabetes. The new research demonstrates a more dramatic transformation--converting skin cells into neurons. This is particularly impressive because the lineage of the two types of cells diverges very early in embryonic development. (Previous research has suggested that neurons could be made from muscle and bone marrow cells, but the fate of the cells at the end of the process was murkier.)

To create the powerful molecular cocktail, scientists started with 20 genes known to play a role in neural development and found only in the brain. All of the selected genes were transcription factors, which bind to DNA and regulate expression of other genes. Using viruses to deliver each gene into skin cells growing in a dish, the team discovered that one gene in particular had the power to convert the skin cells into what looked like immature neurons. After testing other genes in combination with the active one, scientists found a combination of three genes that could efficiently and rapidly convert skin cells into neurons.

The resulting cells show all the hallmarks of neurons--they express neuron-specific genes, they have the characteristic branching shape of neurons, and they can form electrically active connections both with each other and with regular neurons collected from the brain. "Many people thought it would be impossible to transform cells in this way," says Zhou. "The fact that you can convert them so rapidly and efficiently is quite surprising."

Wernig's team is now trying to replicate this phenomenon in human cells. "If we can accomplish that, it opens the door to entire uncharted areas," he says. "Then we can derive neurons from a patient's skin cell, which bypasses the complicated iPS cell process." IPS cells can be tricky to grow, and the process takes four to six weeks, he says.

It remains to be seen which approach will work best in different situations. One advantage of iPS cells is that they can produce more of themselves, and can therefore be grown indefinitely and in large quantities, says Sheng Ding, a biologist at the Scripps Research Institute, in La Jolla, CA, who was not involved in the current research.

It's also not yet clear exactly how the remarkable transformation revealed in the latest work happens. Genetically identical cells can have very different identities thanks to epigenetics, which refers to different mechanisms a cell has for packaging its DNA. That packaging regulates which genes are easily accessible and active in the cell, which in turn determines whether it becomes a skin cell, a heart cell, or a brain cell.

Broadly, scientists think that the transcription factors used in various reprogramming recipes alter this DNA packaging. "We need a real epigenetic and molecular understanding of the mechanism in order to manipulate the system more intelligently," says Zhou.

The mechanisms underlying direct reprogramming may prove more complex than in iPS cell reprogramming. Converting adult cells to an embryonic state may simply involve stripping epigenetic markers. "But when directly reprogramming from one somatic cell to another, you cannot randomly remove epigenetic marks," says Zhou. "You have to remove some and add some and keep many intact. Recognizing which to leave alone and which to change is the key."

Before the technology can be tested for human therapies, researchers will likely need to find a combination of chemicals that can achieve the same results as the genes used in the study, because genetically engineered cells may harbor some cancer risk (scientists have already accomplished this with iPS cells). Researchers will also need to show that the cells can function properly when transplanted into the brain--Wernig now plans to test this in mice engineered to have a disease similar to Parkinson's.

The research is also likely to provoke some rethinking of cell fate. "For a long time, epigenetic modifications were thought to be extremely stable," says Wernig. "Before Dolly the sheep or iPS cells, people thought epigenetic modifications were irreversible--that once set during development, they were not changeable. But this is absolutely not true."

By Emily Singer

From Technology Review

A Safer Way to Coat Long-Lasting Solar Cells

A venture spun out of two Quebec universities says it has developed a safer way of adding antireflective coatings to crystalline silicon solar cells that also boosts their lifetime energy yield.

In the solar photovoltaic market, even the smallest improvement in efficiency can have a meaningful impact on manufacturers' bottom line, which is why antireflective coatings are so important. These thin coatings, which cause solar cells to appear blue, maximize how much sunlight is absorbed and reduce surface defects that can lower performance.

Sun screen: A Sixtron technician holds a solar cell layered with the company's silicon carbide anti-reflective coating. The process is safer and can be integrated into existing manufacturing lines.

However, the most popular coating method--the vapor deposition of a silicon nitride film using silane gas--comes with major risks. Silane can ignite when exposed to air; the gas is costly to transport, and silicon cell manufacturers must invest in special storage, ventilation, and other safety measures to prevent accidents.

"The potential for damage is huge," says Ajeet Rohatgi, director of the Photovoltaic Research Center at the Georgia Institute of Technology. Cells coated this way are also affected by a phenomenon called light-induced degradation, which occurs just once, during the first 24 to 48 hours of sunlight exposure. "In a cell with 18 percent efficiency, you will see efficiency drop [almost immediately] to 17.7 or 17.5 percent, and you've lost that for the life of the cell," he says.

Rohatgi and his team of researchers at Georgia Tech have spent the past 18 months testing a new silane-free process for applying antireflective film to solar cells, which was developed by Montreal-based Sixtron Advanced Materials. The coating--a silicon carbide nitride material carrying the trade name Silexium--reduces light-induced degradation by up to 88 percent.

Crystalline silicon wafers, which are usually doped with boron, also contain oxygen. When sunlight first hits a new cell it causes boron and oxygen to combine, resulting in a 3 percent to 5 percent degradation in cell efficiency. The researchers found that when the Silexium film is added, some of the carbon in the coating ends up diffusing into the bulk of the silicon wafer. They believe the carbon competes with the boron to make a bond with oxygen. Because there's less oxygen for the boron to bond with, light-induced degradation is largely avoided.

Abasifreke Ebong, assistant director of Georgia Tech's Photovoltaic Research Center, says to confirm that this is happening, the next step is to study the oxygen content of the solar wafers after they're removed from the firing furnace. If the oxygen is lower, the theory holds. "That's the data we're waiting for," he says.

According to Mike Davies, senior vice president at Sixtron, every 0.1 percentage point of net efficiency spared from light-induced degradation results, on average, in a $600,000 gain in profit margin for each 60-megawatt cell production line.

Sixtron's system eliminates the silane gas hazard, relying instead on a proprietary solid polymer material that contains silicon and carbon. Using heat and pressure, the solid is converted to a less dangerous methyl silane gas during the cell-coating process. The solid-to-gas conversion takes place inside the company's gas-handling cabinet system, called SunBox, which has been designed to plug directly into industry-standard systems that exist on most cell-production lines.

Joshua Pearce, a professor of advanced materials at Queen's University in Kingston, Ontario, says Sixtron may be overstating the risks of using silane in a photovoltaic cell plant. "There are standard safety procedures that make working in a photovoltaic factory very safe," he says. Still, he adds, "anything to drop the cost of photovoltaic, even if by a small amount, is a great contribution."

Sixtron says it is already working with the top three providers of photovoltaic cell manufacturing equipment in Germany, and has interest from several others. The company plans to rent out the system at roughly the same cost as using a silane-based system. Importantly, the system avoids the need for other strategies to reduce light-induced degradation, such as alternative manufacturing methods or the use of higher-cost wafers doped with gallium.

By Tyler Hamilton

From Technology Review

Using Light to Disinfect Water

Getting access to clean drinking water is an ongoing problem for people in developing countries. And even cities that have good water-treatment systems are looking for better ways to deliver safer, cleaner water. Now an international research team has developed a photocatalyst that promises quick, effective water disinfection using sunlight or artificial light. What's more, the photocatalyst keeps working after the light is turned off, disinfecting water even in the dark.

Coming clean: A micrograph shows the surface of a light-activated catalyst that disinfects water even in the dark. Palladium nanoparticles on the surface of a nitrogen-doped titanium oxide help to extend the catalyst's disinfection power up to 24 hours.

It has long been known that irradiating water with high-intensity ultraviolet light kills bacteria. Some water filters made for campers and hikers, for example, use this technology. Researchers have been working to enhance the method's effectiveness by adding a photocatalyst that gets activated by UV light and generates reactive chemical compounds that break down microbes into carbon dioxide and water.

The new photocatalyst improves on that by using visible, rather than UV, light. Synthesized by Jian-Ku Shang, professor of materials science and engineering at the University of Illinois, Urbana-Champaign, and his colleagues, the photocatalyst works with light in the visible spectrum--wavelengths between 400 and 550 nanometers. It consists of fibers of titanium oxide--a common material used as a white pigment--doped with nitrogen to make it absorb visible light. Alone, the nitrogen-doped titanium oxide kills bacteria, though not efficiently. The researchers added nanoparticles of palladium to the surface of the fibers, greatly increasing the efficiency of the disinfection. He and his colleagues at the Shenyang National Laboratory for Materials Sciences in China published their work online in the Journal of Materials Chemistry.

"It would be very nice to shift activity of the traditional [photocatalyst] materials, which were only activated by ultraviolet radiation, to visible," says Alexander Orlov, assistant professor of materials science and engineering at Stony Brook University in New York. "If you look at the solar spectra, it contains only 5 percent ultraviolet and around 46 percent visible." Such photocatalysts would allow solar energy to be used more efficiently as well as used indoors, since fluorescent lighting contains very little ultraviolet light.

Shang and his colleagues tested the photocatalyst by placing it in a solution containing a high concentration of E. coli bacteria and then shining a halogen desk lamp on the solution for varying lengths of time. After an hour, the concentration of bacteria dropped from 10 million cells per liter to just one cell per 10,000 liters.
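Those figures imply an enormous reduction factor, which a line of back-of-envelope arithmetic makes concrete (a quick check of the numbers quoted above, not part of the study itself):

```python
import math

before = 1e7     # 10 million cells per liter before treatment
after = 1 / 1e4  # one cell per 10,000 liters, expressed in cells per liter

log_reduction = math.log10(before / after)
print(round(log_reduction, 1))  # 11.0 -> an eleven-log (hundred-billion-fold) reduction
```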

The researchers also tested the photocatalyst's ability to disinfect in the dark. They shined light on the fibers for 10 hours to simulate exposure to daylight and then stored them in the dark for various times. Even after 24 hours, the photocatalyst still killed bacteria. In fact, just a few minutes of illumination was enough to keep the photocatalyst activated for up to that length of time.

"Typically, when you have a photocatalyst, the activity will stop almost instantaneously when the light is switched off," Shang says. "The chemical species you generate will only last a few nanoseconds. This is an intrinsic drawback of a photocatalytic system, since you require light activation essentially all the time."

The palladium nanoparticles boost the photocatalyst's power in two ways. When photons hit the material, they create pairs of positive and negative charges--holes and electrons. The positively charged holes on the surface of the nitrogen-doped titanium oxide react with water to produce hydroxyl radicals, which then attack bacteria. "What palladium nanoparticles do is they grab electrons away so most of the holes you produce will be able to survive without being neutralized by electrons," says Shang.

As soon as they grab the electrons, the nanoparticles enter a different chemical state and store the negative charges. "When the light is switched off, that charge gets slowly released, and that slow release is what gives us that memory effect," Shang says. "That charge can react with water molecules to produce oxidizing agents again." He says nanoparticles of other transition metals, like silver, also enhance the photocatalyst's effectiveness.

The photocatalyst offers the ability to disinfect at full power during the day and then keep working at night or during power outages. Also, because the disinfection happens quickly, systems could be designed to clean large volumes of water by exposing it to light as the water flows through pipes, Shang says.

By Corinna Wu

From Technology Review

Potential New Class of Drugs to Combat Hepatitis C Identified

ScienceDaily (Jan. 25, 2010) — Stanford University School of Medicine scientists have discovered a novel class of compounds that, in experiments in vitro, inhibit replication of the virus responsible for hepatitis C. If these compounds prove effective in infected humans as well, they may dramatically accelerate efforts to confront this virus's propensity to rapidly acquire drug resistance, while possibly skirting some of the troubling side effects common among therapies in current use and in late-stage development.

"Hepatitis C virus, or HCV, is a huge problem," said Jeffrey Glenn, MD, PhD, associate professor of gastroenterology and hepatology, and director of Stanford's Center for Hepatitis and Liver Tissue Engineering. "It infects over 150 million people worldwide, many of whom don't even know they have it. Chronic hepatitis C infection is the No. 1 cause of liver cancer and liver transplantation in the United States."

Current treatments for hepatitis C, Glenn said, are only somewhat effective and often toxic. And designing a new antiviral agent is difficult, because a virus thrives by commandeering a host cell's own essential functions.

There are many effective drugs for diseases caused by bacteria. Bacterial cells, like our own, are fully functioning units. But they differ from mammalian cells in ways that make it feasible for them to be attacked with drugs that mostly leave our own cells alone. Antibiotics, which fight bacterial infections, have revolutionized the treatment of contagious disease.

Designing a clean antiviral drug is another story. Unlike bacteria, which multiply by dividing, a virus reproduces by breaking into cells and diverting their manufacturing machinery to produce copies of itself, which eventually depart the ravaged cell to find and exploit fresh ones.

HCV is an especially tough nut to crack. Natural isolates of it can't be grown in culture as can many other viruses, which seriously impedes drug and vaccine research. (There is still no vaccine for hepatitis C.) In recent years, virologists have developed surrogate systems that substantially duplicate the HCV replication process. These systems can be used to test compounds for effectiveness against the virus.

But even when a compound shows effectiveness, HCV mutates readily. So it can rapidly acquire drug resistance. The ultimate solution, Glenn said, is probably to "attack the virus from multiple angles all at the same time with a cocktail of compounds," each targeting a different item in the virus's toolkit. "It's imperative to identify new classes of potential drugs."

Glenn is the senior author of a study, appearing online Jan. 20 in Science Translational Medicine, in which he and his colleagues found a brand-new class of compounds capable of disrupting the HCV replication cycle. (The study's first author is Nam-Joon Cho, a postdoctoral scholar in Glenn's laboratory.) Importantly, the identified compounds do this by interfering with a virus-initiated activity that, while critical to viral replication, doesn't ordinarily occur in uninfected cells. This, Glenn said, offers the prospect of inhibiting this activity -- and stopping viral replication in its tracks -- with little or no toxicity to human cells.

Animal cells are composed mainly of water and water-soluble substances, enclosed within a fatty outer membrane and segregated into distinct subcellular compartments by internal membranes. HCV replicates only in association with such membranes. While some viruses cozy up to membranes that already exist inside living cells, HCV hijacks bits and pieces of membranes and assembles them into large clusters of tiny nested bubbles, or vesicles. The clusters are unlike anything found in a normal human cell.

Glenn and his associates identified a cylinder-shaped chunk of an HCV-encoded protein that is essential for that protein's known vesicle-aggregating activity. A synthetic version of this cylindrical section caused vesicles to aggregate into telltale clusters. The investigators then used this finding to attack the virus: they found that mutations in the synthesized segment destroyed the virus's ability to replicate.

While that insight was important, it alone did not translate into a therapy. "You can't treat a patient that way -- by going in, removing all the viruses, mutating them and putting them back in," Glenn said.

Instead, the researchers set out to detect compounds that could prevent this key protein segment from working. They designed an assay consisting of hundreds of separate tiny depressions in a plastic laboratory dish, with each depression housing large numbers of individual tiny vesicles in solution. As expected, sprinkling some of the synthesized protein segment into a depression caused the vesicles inside to clump together. But laborious testing of numerous off-the-shelf compounds -- a different one in each depression -- showed that some prevented the aggregation from happening. Two of these compounds markedly impaired HCV's ability to reproduce itself in the workhorse surrogate HCV replication system.

Glenn foresees a year to 18 months of extensive preclinical and animal testing before this class of compounds can gain Food and Drug Administration approval to enter all-important clinical trials in humans. Because the compounds disrupt a mechanism that only the virus needs, he said, he hopes that drugs based on them will both suppress viral replication and show a favorable toxicity profile.

The study was funded by the Burroughs Wellcome Fund, the National Institutes of Health and the Stanford University CTSA and SPARK programs. Other Stanford study co-authors were Hadas Dvory-Sobol, PhD, Choongho Lee, PhD, Paul Bryson, PhD, Marilyn Masek and Menashe Elazar, PhD, of Glenn's laboratory; and Curtis Frank, PhD, the W.M. Keck, Sr. Professor in engineering and professor of chemical engineering. Cho, Dvory-Sobol, Lee, and Glenn have equity interests in Eiger BioPharmaceuticals Inc., a privately held, Palo Alto-based start-up to which Stanford has licensed intellectual property relevant to the new compounds.


NASA's Next Space Suit

If NASA returns to the moon in 2020 as planned, astronauts will step out in a brand-new space suit. It will give them new mobility and flexibility on the lunar surface while still protecting them from its harsh environment. The suit will also be able to sustain life for up to 120 hours and will even be equipped with a computer that links directly back to Earth.

To infinity and beyond: David Clark Company is designing a new U.S. space suit for missions to the space station, moon, and Mars. It has interchangeable parts, so the arms, legs, boots, and helmet can be switched. The first configuration, shown here, is designed for launch, descent, and emergency activities, while the second design is meant for lunar exploration.
Credit: Brittany Sauser

The new design will also let astronauts work outside of the International Space Station (ISS) and will be suitable for trips to Mars, as outlined in NASA's program for exploration, called Constellation. "The current suits just cannot do everything we need them to do," says Terry Hill, the Constellation space suit engineering project manager at NASA's Johnson Space Center in Houston. "We have a completely new design, something that has never been done before."

NASA has proposed a plug-and-play design, so that the same arms, legs, boots, and helmets can be used with different suit torsos. "It's one reconfigurable suit that can do the job of three specialized suits," says Hill. The space agency has awarded a $500 million, 6.5-year contract for the design and development of the Constellation space suit to Houston-based Oceaneering International, which primarily makes equipment for deep-sea exploration. Oceaneering has partnered with the Worcester, MA-based David Clark Company, which has been developing space suits for the U.S. space agency since the 1960s.

The space shuttle astronauts currently wear two different types of space suits. The Advanced Crew Escape Suit (ACES) is worn during the launch and ascent phases of flight. It is a soft, fabric-based suit that protects against loss of atmospheric pressure or cold-water exposure in case of an ocean landing, and it provides water cooling to regulate an astronaut's body temperature. The full assembly includes a survival pack, an emergency oxygen system, and a personal parachute so that astronauts can abort the shuttle during the landing phase.

Astronauts wear a second suit, called the Extravehicular Mobility Unit (EMU), when they perform tasks outside the confines of the shuttle or the ISS, such as adding solar panels to the space station or performing repairs. It has a hard upper torso, layers of material to protect astronauts from micrometeoroids and radiation, a temperature-regulation system, and its own life support and communication systems. The EMU weighs over 300 pounds and has limited leg mobility--astronauts' feet are normally locked in place on foot restraints while performing extravehicular tasks, and during Apollo missions, astronauts were forced to develop a bunny hop to traverse the lunar surface.

"When we went to the moon the first time, we were just trying to get there. Now astronauts need to be able to explore the surface, harvest resources, and do science," says Daniel Barry, vice president and director of research and development at David Clark Company, and head of the Constellation space suits project.

The new space suit will consist of two configurations. The first is similar to the current space shuttle escape suit, and it is designed for launch, reentry, and emergency operations in zero gravity. It's soft and allows for mobility in the event of pressure loss or in case crew members need to abort.

When existing space suits are pressurized, they tend to stiffen. For the Constellation suits, Barry's team has built in panels of material at the joints--shoulders, elbows, and knees--that keep the volume inside the suit constant, allowing astronauts to move easily. David Clark engineers are also developing breathable materials for the suit, making it more comfortable than conventional urethane- or neoprene-coated nylon fabrics.

The second configuration of the Constellation space suit, which will be used for lunar excursions, uses the same arms, legs, boots, and helmet. These are snapped onto a new reinforced torso equipped with life support, electronics, and communication systems. Astronauts will also put on an outer garment to protect them from the harsh lunar environment, including micrometeorites. Engineers are also working on enhanced materials to combat the very fine lunar dust, which, as NASA learned from the Apollo missions, can be problematic and hazardous to the crew.

The new design will eliminate many of the hard elements that add weight to current space suits and can injure the crew in the event of a rough landing. Instead, engineers are using lightweight composite structures. Furthermore, astronauts will be able to get in and out of the suit more quickly through a rear zippered entrance. The current suits are made of two pieces that take three hours and a helping hand to put together.

Barry says that a single modular suit will be cheaper to manufacture and will reduce launch mass and logistical complexity. David Clark Company has built an early prototype that will undergo testing next week at NASA with the new crew exploration vehicle, called Orion, which is also being developed for the Constellation program.

Hill says the first completed suit will be ready for testing in September; the final design is due in 2013 and should be flight-ready in 2015. The lunar suit will incorporate OLED displays and a computer, and will act like a node on the Internet, relaying data back to Earth.

In the coming weeks, the Obama administration will make a decision on the future of U.S. human spaceflight, which could significantly change the direction of the Constellation program. "The bottom line is that if we are going to do manned missions, we need a new space suit," says Hill. And, he adds, "we have made the suit modular for that reason; if they decide to skip the moon and go to Mars, it does not change our architecture."

By Brittany Sauser

From Technology Review

New Life for Magnetic Tape

Music lovers may have long forsaken them, but magnetic tapes still reign supreme when it comes to storing vast amounts of digital data. And new research from IBM and Fujifilm could ensure that tape remains the mass storage medium of choice for at least another decade.

Tape deck: The read-write machine used to demonstrate a new magnetic tape technology developed by IBM and Fujifilm.

At IBM's Zurich Research Laboratories in Switzerland, researchers have developed a new tape material and a novel tape-reading technology. In combination, they can store 29.5 billion bits per square inch, which translates to a cartridge capable of holding around 35 terabytes of data--more than 40 times the capacity of cartridges currently available, and several times more than a hard disk of comparable size.
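A quick back-of-the-envelope check shows how these figures fit together. The tape width below is an assumption (half-inch tape is the common cartridge format, but the article does not state dimensions), so this is a rough sanity check, not a specification:

```python
# Sanity-check the quoted figures: 29.5 Gbit/in^2 areal density and a
# ~35 TB cartridge. Tape width is assumed (half-inch), not from the article.
density_bits_per_in2 = 29.5e9      # demonstrated areal density
cartridge_bytes = 35e12            # quoted cartridge capacity, ~35 TB
tape_width_in = 0.5                # assumed half-inch tape

bits_needed = cartridge_bytes * 8
area_in2 = bits_needed / density_bits_per_in2
length_m = (area_in2 / tape_width_in) * 0.0254  # inches -> meters
print(f"implied tape length per cartridge: {length_m:.0f} m")
```

Under these assumptions the numbers imply a spool of a few hundred meters of tape, which is consistent with the article's later point that hundreds of meters of tape fit on a single cartridge.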

The researchers used a relatively new magnetic medium, called barium ferrite. In cooperation with researchers from Fujifilm's labs in Japan, they oriented the barium ferrite magnetic particles so that their magnetic fields protrude perpendicularly from the tape, instead of lengthwise. This means that more bits can be crammed into a given area, and the magnetic fields are stronger. Furthermore, these particles allow thinner tape to be used, meaning 12 percent more tape can be stored on a single spooled cartridge.

Increasing the density of data that can be stored on a tape makes it more difficult to reliably read information. This is already a problem because of electromagnetic interference and because the heads themselves will retain a certain amount of residual magnetism from readings. To overcome this, the IBM group developed new signal processing algorithms that simultaneously process data and predict the effect that electromagnetic noise will have on subsequent readings.

Hard disks can store more data on a given surface area than magnetic tape, and the data on a disk can be read faster. But because hundreds of meters of tape can be spooled on a single cartridge, the overall volumetric data density of tape is higher, says Evangelos Eleftheriou, head of the Storage Technologies group at IBM Zurich.

Crucially, tape storage is also much cheaper. "What's most important is the cost per gigabyte," says Eleftheriou. Solid state drives cost between $3 and $20 per gigabyte. In contrast, it costs less than a cent per gigabyte to store information on magnetic tape. In the third quarter of 2009, the global tape market was worth more than half a billion dollars.

Extending the life of magnetic tape technology could delay the arrival of new storage technologies, particularly holographic storage. Experimental holographic discs, which use patterns of light interference to hold multiple pieces of data at a single point, can already hold several hundred gigabytes of data. The technology is expected to eventually allow terabytes of data to be held on a disc.

"Tape still wins, but only at very high data volumes," says James Hamilton, a vice president and distinguished engineer on Amazon's Web services team, in Bellevue, WA. Tape is most suitable for "cold storage"--when data is not accessed frequently. But the volume of digital data that needs to be stored is increasing rapidly, so Hamilton says there's a real need to try to squeeze more out of tape.

It could take another five years before the new tape technology is ready for the market, Eleftheriou admits. "But we have shown that there is still at least another 10 years of life in it," he says.

By Duncan Graham-Rowe

From Technology Review

Power of genomics cracks soybean code

PARIS (AFP) – Scientists on Wednesday unveiled the genome of the soybean, saying it was an achievement that should deepen understanding of one of the world's most important crops, help to boost yields and defend the plant against pests.

The study, published by the British weekly science journal Nature, provides a springboard for research into soy's DNA structure and protein-making machinery, its authors said.

Eighteen organisations, most of them American, teamed up in a 15-year endeavour that yielded a draft of 85 percent of the soybean's 1.1 billion base pairs, the "rungs" in the double-helix ladder of DNA.

"Soybean and other legumes play a critical role in global food security and human health and are used in a wide range of products, from tofu, soy flour, meat substitutes and soy milk to soy oil-based printing ink and biodiesel," said Molly Jahn, deputy under secretary at the US Department of Agriculture.

"This new information about soybean's genetic makeup could lead to plants that produce more beans that contain more protein and oil, better adapt to adverse environmental conditions or are more resistant to diseases," she said in a press release.

More than 46,000 soy genes have been identified, including key genes involved in the transformation of water, sunlight, carbon dioxide, nitrogen and minerals into energy and proteins.

One early breakthrough is the discovery of a gene that appears to confer resistance to a disease called Asian soybean rust, which can devastate up to 80 percent of a harvest.

Another, more futuristic, benefit could be in a next-generation form of biodiesel.

More than 1,000 genes involved in lipid metabolism have been spotted, said one of the researchers, Gary Stacey of the National Center for Soybean Biotechnology at the University of Missouri.

"These genes and their associated pathways are the building blocks for soybean oil content and represent targets that can be modified to bolster output and lead to the increase of the use of soybean oil for biodiesel production."

Biotechnologists have already unravelled the genome of rice, corn and the grape vine among other staples.


Cleaner Jet Fuel from Coal

The Air Force is testing a jet fuel made from coal and plant biomass that could replace petroleum-based fuel and emit less carbon dioxide than conventional jet fuels. The fuel is made with a process developed by Accelergy, based in Houston, using technology licensed from ExxonMobil Research and Engineering Company and the Energy and Environmental Research Center at the University of North Dakota.

Planting jet fuel: Ben Oster, a research engineer at the Energy and Environmental Research Center at the University of North Dakota, holds a sample of jet fuel made from plant oils. Accelergy has licensed the technology used to make this fuel.

Other recently tested experimental biofuels for jets have required that the aircraft still use at least 50 percent petroleum-based product to meet performance requirements, particularly for the most advanced military jets. But the Accelergy process produces fuels that closely resemble petroleum-based fuels, making it possible to do away with petroleum altogether. Because of this, the new process could help the Air Force meet its goal of using domestic, lower-carbon fuels for half of its fuel needs by 2016. Although the first products will be jet fuels, the process can also be adapted to produce gasoline and diesel.

The fuel has passed through an initial round of testing, including lab-scale engine tests, and is on track to be flight-tested in 18 months, says Rocco Fiato, vice president of business development at Accelergy.

Turning coal into liquid fuels is nothing new, but such processes have been inefficient and have produced large amounts of CO2. Accelergy's approach is different because it uses "direct liquefaction," which is similar to the process used to refine petroleum: the coal is treated with hydrogen in the presence of a catalyst. Conventional technology for converting coal to liquid fuels first breaks the coal down into synthesis gas, which is mostly carbon monoxide with a little hydrogen; the hydrogen and carbon are then recombined to produce liquid hydrocarbons, a process that releases carbon dioxide.

Because the Accelergy process skips the energy-intensive step of gasifying all of the coal before recombining the hydrogen and carbon, it is more efficient and produces less carbon dioxide. "We don't destroy the molecule in coal. Instead we massage it, inject hydrogen into it, and rearrange it to form the desired hydrocarbons," says Timothy Vail, Accelergy's president and CEO.

The hydrogen for Accelergy's process comes from two sources: coal and biomass. Accelergy gasifies about 25 percent of the coal it uses, along with cellulosic biomass from sources such as plant stems and seed husks, to produce syngas. The company then treats the syngas with steam; in this reaction, carbon monoxide reacts with water to form hydrogen and carbon dioxide. Using biomass reduces net carbon-dioxide emissions, since the biomass absorbed CO2 from the atmosphere as the original plants grew.

The technology also uses biomass in another way. The company processes seed crops, such as soybeans or camelina, which contain large amounts of oil. After extracting that oil (which leaves behind cellulosic materials that are gasified), the oil is processed to remove oxygen atoms, forming long chains of straight hydrocarbon molecules. These are then treated to make the straight molecules into branch-like molecules that remain liquid at lower temperatures, making them useful in jet fuel.

The use of biomass reduces net carbon dioxide emissions, but so does the fact that direct liquefaction is more efficient than conventional gasification, says Daniel Cicero, the technology manager for hydrogen and syngas at the U.S. Department of Energy's National Energy Technology Laboratory (NETL), in Morgantown, WV. In gasification, only about 45 percent of the energy in the coal is transferred to the fuel produced. Accelergy claims efficiencies as high as 65 percent using direct liquefaction. Yields of fuel are also higher. Gasification methods produce about 2 to 2.5 barrels of fuel per ton of coal. Direct liquefaction produces over three barrels per ton of coal, and adding the biomass brings the total to four barrels per ton of coal.

All told, Fiato says, gasifying coal to produce liquid fuel produces 0.8 tons of carbon dioxide per barrel of fuel, while Accelergy's process produces only 0.125 tons of CO2 per barrel. That makes it competitive with petroleum refining, especially the refining of heavier forms of petroleum. (The fuels produce about the same amount of carbon dioxide when they're burned.)
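Combining the per-barrel emissions with the per-ton yields quoted above gives a rough per-ton-of-coal comparison. This is back-of-the-envelope arithmetic on the article's own figures only; the midpoint yield for gasification is an assumption:

```python
# Rough comparison implied by the quoted figures.
co2_gasification = 0.8     # tons CO2 per barrel, conventional gasification
co2_direct = 0.125         # tons CO2 per barrel, Accelergy's process

yield_gasification = 2.25  # barrels per ton of coal (assumed midpoint of 2-2.5)
yield_direct_biomass = 4.0 # barrels per ton of coal with biomass added

per_ton_conventional = co2_gasification * yield_gasification
per_ton_accelergy = co2_direct * yield_direct_biomass
print(f"conventional: {per_ton_conventional} t CO2/ton coal")
print(f"Accelergy:    {per_ton_accelergy} t CO2/ton coal")
```

By this arithmetic, each ton of coal processed conventionally carries several times the process emissions of the direct-liquefaction route, even before counting the yield advantage.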

In addition to reducing carbon emissions compared to conventional coal to liquids technology, a key advantage of the process is the ability to make high-quality jet fuels. The direct liquefaction of coal produces cycloalkanes, looped molecules that have high energy density, giving airplanes greater range. They are also stable at high temperatures, allowing them to be used in advanced aircraft.

One drawback to the process is that it costs more than refining petroleum. Indeed, Cicero says that an NETL study of coal-and-biomass-to-liquid-fuels technology suggests it would not be competitive unless petroleum prices stay above $86 to $93 a barrel. (The study was based on conventional gasification processes.) He says that supplying fuel to the Air Force could sustain one or two small Accelergy plants, but to move beyond this would require a price on carbon-dioxide emissions of about $35 a ton.

By Kevin Bullis

From Technology Review

DNA sweep finds new genes linked to diabetes

PARIS (AFP) – Hundreds of scientists sifting through genetic data from 122,000 people have quintupled the number of gene variants known to boost the risk of diabetes, a pair of studies released Sunday reported.

A consortium of researchers first isolated 10 gene mutations that help determine the body's ability to regulate blood sugar and insulin levels, the key factors underlying type 2 diabetes.

In a companion study, the same consortium -- pooling the resources of more than 100 institutions in Europe, the United States, Canada and Australia -- determined that two of these newly-identified variants directly influenced the risk of diabetes.

AFP/HO/File – This undated illustration shows the DNA double helix.

It also fingered an additional three genetic culprits that had already been linked to changes in glucose levels.

"Only four gene variants had previously been associated with glucose metabolism, and just one of them was known to affect type 2 diabetes," said Jose Florez, a researcher at Massachusetts General Hospital and co-lead author of one of the studies.

"Finding these new pathways can help us better understand how glucose is regulated, distinguish between normal and pathological glucose variations, and develop potential new therapies," he said in a statement.

One gene in particular, known as GIPR-A, was found to play a prominent role.

Usually, GIPR-A produces a protein that is part of the normal hormone response to eating, stimulating the release of insulin to control sugar levels in the blood.

The mutated version, however, impairs this response, resulting in elevated glucose levels.

Diabetes occurs when our bodies fail to produce sufficient insulin or when our cells fail to recognise and react to the insulin produced, resulting in abnormally high blood sugar levels.

The MAGIC -- Meta-Analysis of Glucose and Insulin-related traits Consortium -- investigators noted that other genetic factors related to diabetes remain to be found.

"We've still only identified about 10 percent of the genetic contribution to glucose levels in nondiabetic individuals, so we need to investigate the impact of other possibly more complex or rare forms of gene variation, along with the role of gene-environment interaction, in causing type 2 diabetes," said Florez.

Diabetes is closely linked to lifestyle, especially the kinds and quantity of food we consume.

More than 220 million people worldwide are afflicted with the disease, which kills more than one million people every year, according to the World Health Organization (WHO).

As obesity rates increase, the number of deaths could double between 2005 and 2020, the WHO has said.

Both studies were published online in the journal Nature Genetics.


Clinton Pressures China over Google Attack

In a speech given yesterday at the Newseum in Washington, D.C., U.S. Secretary of State Hillary Clinton put pressure on the Chinese government to address the cyber attacks revealed recently by Google. For much of the speech, which focused largely on promoting Internet freedom, Clinton avoided mentioning China specifically. But her comments condemned Internet censorship and cyber attacks in no uncertain terms.

Challenging China: Speaking at the Newseum in Washington, D.C., U.S. Secretary of State Hillary Clinton said no company should tolerate government intrusion into Internet freedom.

Clinton's remarks paint the U.S. vision of the Internet in stark contrast to China's. In her talk, Clinton stressed the benefits of enforcing the principles of freedom of expression, assembly, and universal access online. In contrast, China has a reputation for routinely blocking access to politically sensitive content and gathers information on dissidents via their Internet communications.

Clinton also addressed Google's disclosure directly. "We look to the Chinese authorities to conduct a thorough review of the intrusions that led Google to make its announcement," Clinton said.

Clinton sharply criticized Internet censorship and companies that cooperate with it. "Censorship should not be in any way accepted by any company from anywhere," she said, warning that efforts to limit information flow create a less useful, fragmented Internet. In particular, she said that "unfettered access to search engine technology is so important in individual lives."

She also called for more cooperation across jurisdictions when fighting Internet crime. "Countries or individuals that engage in cyber attacks should face consequences and international condemnation," she said.

Although Google has not released details of the attacks it detected, security researchers have begun piecing information together. Though the search giant stopped short of blaming the Chinese government directly for the attacks, its decision to end cooperation with state censorship requests strongly implies that the company suspects government involvement.

Independent researchers have also begun gathering evidence that pinpoints the source of the attacks. Joe Stewart, director of malware research for the counter threat unit at an Atlanta-based security company called SecureWorks, went public this week with research suggesting a link between the malware used in the attack and research into algorithms posted on Chinese-language websites.

Stewart was analyzing the Hydraq Trojan, the malicious program believed to be responsible for accessing internal corporate networks at the companies that were attacked, when he found that the software used an unfamiliar algorithm to check for errors in stored or transferred data. Stewart investigated and found that this particular implementation had been described only on Chinese-language sites, suggesting a link to hackers in mainland China.
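To illustrate the kind of routine at issue, here is a sketch of a table-driven 16-bit cyclic redundancy check (CRC), the general style of error-checking code Stewart analyzed. The polynomial and constants below are the standard CRC-16/CCITT values, not those of the Trojan; the point is that an unusual choice of constants in compiled code can act as a fingerprint linking a binary to published source code:

```python
# Illustrative table-driven CRC-16 (standard CCITT parameters, MSB-first).
# A nonstandard variant of such a table, found in a binary, can tie the
# code to wherever that variant was published.
def make_table(poly=0x1021):
    """Precompute the 256-entry lookup table for the given polynomial."""
    table = []
    for byte in range(256):
        crc = byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
        table.append(crc)
    return table

def crc16(data, table, init=0xFFFF):
    """Process one byte per table lookup instead of one bit at a time."""
    crc = init
    for b in data:
        crc = ((crc << 8) & 0xFFFF) ^ table[((crc >> 8) ^ b) & 0xFF]
    return crc

TABLE = make_table()
print(hex(crc16(b"123456789", TABLE)))  # 0x29b1, the standard check value
```

A reverse engineer comparing the 256 table constants embedded in a binary against published implementations is doing essentially this comparison in reverse.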

Stewart notes that "reverse-engineering an executable binary is never conclusive," but adds that the Trojan's behavior also fits with that of other attacks that originated from China. However, he says he hasn't noticed any features of the malware that suggest sophistication beyond other recent attacks.

After penetrating a system through some vulnerability, Stewart says, the Trojan installs itself to the system and tries to phone home to a control server. Once it's connected, it can gather files and information about the network, and even take control of local systems.

Some researchers have suggested that the recent attacks were likely similar to "GhostNet," a cyber-spying operation originating in China that was said to have targeted the Dalai Lama and other human-rights activists. For that series of attacks, hackers sent target users carefully crafted e-mails containing personal information in an attempt to convince them to click a malicious link or open an attachment loaded with malware.

Last week, the security company McAfee released news that a flaw in Microsoft's Internet Explorer had opened the door to installing malware on some of the affected networks. Microsoft also issued a patch yesterday to close this flaw.

But some researchers have said that it remains unclear exactly how the company networks were attacked. Evgeny Morozov, a Yahoo! fellow at Georgetown University's Edmund A. Walsh School of Foreign Service, says there is no entirely coherent explanation of events. The flaw in Internet Explorer alone would not have provided complete access, Morozov says. He notes that there were likely many other important features of the attacks, including how networks and files were configured. Some have even speculated that the attackers could have had help from workers within Google.

Amichai Shulman, CTO of Imperva, a data-security company based in Redwood Shores, CA, agrees that too much attention has been placed on the flaw in Internet Explorer. "Most botnets and malware don't rely on a single vulnerability for infection," he says. "They usually try to exploit two or three vulnerabilities at the same time."

Even if Google pulls its operations out of China, it will still face Internet security threats, Morozov says. Revealing the cyber attacks may have given the company U.S. government support and a way out of a difficult censorship situation, he says, but "cyber attacks have become a daily nuisance that every company has to deal with. As long as Google offers important services like e-mail, it will still be a target."

By Erica Naone

Tracking a Superbug with Whole-Genome Sequencing

By sequencing the entire genome of numerous samples of the notorious MRSA (Methicillin-resistant Staphylococcus aureus) bacteria--a drug-resistant strain of staph responsible for thousands of deaths in the United States each year--researchers at the Wellcome Trust Sanger Institute in the United Kingdom have gained clues as to how the superbug travels both around the globe and in local hospitals. Scientists say the approach will shed light on the epidemiology of the troublesome bacteria and help public health programs target their prevention efforts most effectively.

Bug decoding: By sequencing the genomes of numerous samples of the drug-resistant bacterium MRSA (shown here in yellow), scientists have confirmed where and when the superbug emerged.

The research, which would have been impossible just two years ago, was enabled by fast and inexpensive sequencing technology from Illumina, a genomics company based in San Diego. "The work demonstrates the value of applying high-resolution sequencing technology to public health problems," said Caroline Ash, a senior editor at the journal Science, where the research was published, at a press conference on Wednesday. "Potentially the technology could pinpoint the origin of the outbreak and the origin of its spread."

About 30 percent of people carry Staphylococcus aureus bacteria on their skin, often harmlessly. But for some people, the microbes can cause severe problems, including serious skin infections, sepsis, and death. Antibiotic-resistant strains of the bacterium emerged in the 1960s, and these now account for more than half of all hospital-acquired infections in the U.S.

In the current study, researchers sequenced 63 MRSA samples, some collected from across the globe during a 20-year period, and some from a single hospital in Thailand over 20 months. While standard analysis methods, which analyze only a small portion of the microbes' DNA, classified each isolate as being of the same subtype, sequencing the whole genome allowed scientists to identify very small genetic differences between the microbes.
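At its core, the fine-grained comparison that whole-genome sequencing enables comes down to counting single-nucleotide differences (SNPs) between aligned genomes; those pairwise distances are what feed an evolutionary tree. A toy sketch with made-up sequences (real MRSA core genomes run to millions of bases):

```python
# Count single-nucleotide differences between aligned sequences.
# Sequences here are short stand-ins, not real MRSA data.
def snp_distance(seq_a, seq_b):
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

isolates = {
    "hospital_A": "ACGTACGTAC",
    "hospital_B": "ACGTACGTTC",
    "overseas":   "ACGAACGTTG",
}
for name in ("hospital_B", "overseas"):
    d = snp_distance(isolates["hospital_A"], isolates[name])
    print(f"{name}: {d} SNPs from hospital_A")
```

Standard subtyping looks at only a handful of loci and would call all three isolates identical; counting differences across the whole genome is what separates near-identical local strains from more distantly related imports.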

The researchers constructed an evolutionary tree for the microbes and confirmed that MRSA likely first emerged in Europe in the 1960s, coinciding with the growing use of antibiotics to treat staph infections. The tree also showed that the superbugs evolved drug resistance multiple times over the past 40 years. "That demonstrates there is immense selective pressure caused by antibiotic use worldwide," said Simon Harris, the lead author on the study, at the press conference.

The researchers also analyzed minor genetic differences in MRSA samples collected in a much more localized setting--a single hospital in Thailand--and discovered greater than expected diversity among the microbes. That suggests that patients were infected by new strains coming into the hospital, rather than patient-to-patient transmissions, says Harris. That finding might affect control measures. "If you institute infection control settings in this hospital, it will only have limited success because, in this case, it looks like patients appear to be getting the infection from other sources," said Sharon Peacock, a clinical microbiologist at the University of Cambridge who also participated in the research, at the press conference.

(The analysis will not affect how individual patients are treated, because all variants of MRSA are treated the same. Tests that classify subtypes of the bacteria are used to track the spread of infection, rather than to make treatment decisions.)

It's not yet clear how the technology might be incorporated into standard public health efforts to track and control MRSA infections. "I believe this approach will expand our understanding of the evolution of MRSA, but I don't think it will catch on right now in hospitals," says Dan Diekema, a physician and epidemiologist at the University of Iowa, who was not involved in the study. "I think most hospitals find our current typing methods to be adequate in helping to guide prevention efforts. It's not exactly clear to me how having a finer-grained look would impact prevention efforts."

Peacock aims to address that question with ongoing studies. "We hope to clearly define through prospective studies of this tool what its value is and how to incorporate it into a hospital setting," she said. The technology would need to undergo some major changes to transform from research tool to standard surveillance measure. "Before being adopted in standard clinical practice, the technology needs to be adapted so that it can be used in any major laboratory," said Peacock. "We want the readout not in terms of sequencing but in what potential virulence it carries, and what its origins might be."

Currently, sequencing a MRSA genome takes four to six weeks and costs about $300. "Even though turnaround time has been dramatically reduced from years to weeks, it's still not a practical timescale for use in clinical settings," said Stephen Bentley, senior author on the study. "But I expect that with third-generation sequencing technology, the turnaround time could be reduced to hours and the per sample cost might be reduced to the £20 mark [about $30]."

The project is part of a growing trend that capitalizes on increasingly affordable sequencing technologies to track the origins, evolution, and migration of human pathogens. "It's going to require scientists to do some real innovative thinking to fully explore the potential of this technology," said Bentley. Researchers at the Sanger Institute have already applied it to a number of other infectious organisms, including those responsible for tuberculosis, pneumonia, and meningitis.

By Emily Singer

Made-to-Order Heart Cells

Last month, Madison, WI-based Cellular Dynamics International (CDI) began shipping heart cells derived from a person's own stem cells. The cells could be useful to researchers studying everything from the toxicity of new or existing drugs to the electrodynamics of both healthy and diseased cardiac cells. CDI's scientists create their heart cells--called iCell Cardiomyocytes--by taking cells from a person's own blood (or other tissue) and chemically reversing them to a pluripotent state, meaning they can grow, or be programmed to grow, into any cell type in the body.

Heartbeats: Spontaneously beating iCell Cardiomyocyte cells.

The science comes from the lab of CDI cofounder and stem-cell pioneer James Thomson of the University of Wisconsin. In 2007, his lab published a study led by postdoc Junying Yu in the journal Science that detailed how to reverse virtually any human cell back to an undifferentiated state known as an induced pluripotent stem cell, or IPS cell. (Japanese physician and geneticist Shinya Yamanaka also created IPS cells from humans and published details in the journal Cell in 2007.)

"One of the biggest advantages of these cells is we can make them in quantity and on demand," says CDI CEO Robert Paley. "Before, you had to get heart cells from a cadaver, so there was a limited supply."

Paley and CDI's chief commercial officer, Chris Kendrick-Parker, discussed the stem-cell-derived cardiac cells at the JP Morgan Healthcare Conference in San Francisco last week. A customer receives all of the different types of heart cells in a vial about the size of the tip of a little finger; some of the 1.5 million to 5 million heart cells in the vial can be induced to pulse when placed in a petri dish.

CDI designed the cardiac cells primarily to aid drug discovery and to help predict the efficacy and toxicity of different drugs. Other tests might include screens to determine if there are differences in how various ethnicities and other genetic subpopulations respond to drugs--such populations can be at higher risk for side effects from drugs, and at a higher risk for the drugs simply not working. Some researchers also plan to see how cells derived from patients with different types of heart disease respond to particular drugs.

"Using these cells to find out which drugs work on an individual's cells based on their genetics is a very promising new technology," says geneticist Leroy Hood of the Institute for Systems Biology in Seattle, "though we have yet to see how promising."

The cells provided by CDI also offer a ready supply for scientists conducting basic research in how cardiac cells function. But they do not come cheap. Paley says they cost about $1,000 for a vial, compared to about $800 for cells from cadavers, though the latter are not as readily available, and they don't beat in a petri dish, limiting researchers' ability to study the electrodynamics of heart cells.

IPS-derived cells also have the potential to become a powerful predictive tool when combined with genetic profiling that identifies genetic predispositions to adverse reactions to drugs. Cells derived from people with DNA markers giving them a high risk for side effects from a drug can be tested to determine if the risk is real before they ever take the medication.

"Before CDI, these cells were very difficult to obtain, and we would only get tiny amounts," says stem cell biologist Sandra Engle, a senior principal scientist at Pfizer. "This doesn't work for high-throughput testing for drugs." Researchers use high-throughput processes to test up to thousands of different drug compound candidates to see which work and are safe.

The cells also allow researchers to test cells from the same stem-cell stock over time as they develop drugs, which can take years, explains Engle. "We could not do that before." Researchers also can check drugs on cells from different types of patients to see if there are different reactions, and study why some cells become diseased.

IPS cells are genetically and immunologically compatible with the person who provided the original cells. This means that cardiac and other cells produced from IPS cells won't be rejected by a person's immune system, always a possibility with cells that come from donors, animals, or cadavers.

Because of this, IPS-derived cells for the heart and other tissue may one day be a perfect genetic match for effecting repairs to damaged tissue in the heart and elsewhere, though IPS-derived cells cannot yet be used as spare parts. "Therapies using these cells are still a long way off," says Paley. "We don't yet know how to graft them to grow in human tissue."

However, personalized drug testing has the potential to become a powerful predictive tool combining genetic profiling with cell-toxicity screening. It could help determine adverse reactions to drugs in genetically high-risk individuals before they ever take a given drug.

"This is a game changer," says Engle. "It's going to dramatically change biology and drug development."

By David Ewing Duncan

Climate Change Authority Admits Mistake

One of the most alarming conclusions from the Intergovernmental Panel on Climate Change (IPCC), a widely respected organization established by the United Nations, is that glaciers in the Himalayas could be gone 25 years from now, eliminating a primary source of water for hundreds of millions of people. But a number of glaciologists have argued that this conclusion is wrong, and now the IPCC admits that the conclusion is largely unsubstantiated, based on news reports rather than published, peer-reviewed scientific studies.

Still there: The Khumbu glacier, in front of Mount Everest, is one of the longest glaciers in the world. Though the Himalayan glaciers are being affected by global warming, they won’t disappear in 25 years, as the authors of a recent report by the Intergovernmental Panel on Climate Change incorrectly predicted.

In a statement released on Wednesday, the IPCC admitted that the Working Group II report, "Impacts, Adaptation and Vulnerability," published in the IPCC's Fourth Assessment Report (2007), contains a claim that "refers to poorly substantiated estimates." The statement also said "the clear and well-established standards of evidence, required by the IPCC procedure, were not applied properly." The statement did not quote the error, but it did cite the section of the report that refers to Himalayan glaciers. Christopher Field, director of the Carnegie Institution's Department of Global Ecology, who is now in charge of Working Group II, confirms that the error was related to the claim that the glaciers could disappear by 2035.

The disappearance of the glaciers would require temperatures far higher than those predicted in even the most dire global warming scenarios, says Georg Kaser, a professor at the Institut für Geographie der Universität Innsbruck. The Himalayas would have to heat up by 18 degrees Celsius and stay there for the highest glaciers to melt--most climate change scenarios expect only a few degrees of warming over the next century.

The mistake has called into question the credibility of the IPCC, which has been considered an authoritative source for information about climate change because of its policy of carefully reviewing and analyzing hundreds and even thousands of published, peer-reviewed scientific studies. But the scientists who uncovered the error say that the mistake, and the reliance on news reports and unpublished studies, is rare. "I don't think it ought to affect the credibility of the edifice as a whole," says J. Graham Cogley, professor of geography at Trent University, who was key to identifying the original sources of the information in the IPCC report.

The error has been traced to the fact that the IPCC permits the citation of non-peer-reviewed sources, called "grey literature," in cases where peer-reviewed data is not available. It requires that these sources be carefully scrutinized, but that didn't happen in this case. The process has "gone spectacularly wrong in this particular instance," Cogley says.

That claim went unchallenged during the normal, multistep review process used by the IPCC. The error wasn't widely noticed until last November, when a group of scientists began discussing a new study of the Himalayan glaciers. The discussion led Cogley to look up the original sources for the claims in the IPCC report. He found two sources, a news report in the London-based magazine New Scientist about an as-yet unpublished study, and an article that estimated the glaciers could shrink to one-fifth their current area by 2350, rather than 2035, putting the IPCC report off by about 300 years. His finding is described in a letter to the editor to be published January 29 in the journal Science. Field confirms that the New Scientist was one of the sources, but not the report giving the date of 2350.

David Victor, the director of the Laboratory on International Law and Regulation at the University of California, San Diego, says that the error should not lead to a major change in the IPCC's process. "A very small fraction of IPCC reports stems from grey literature," and an outright ban on such sources would be a bad idea, since it would prevent the organization from drawing on certain types of valuable information. For example, government reports, or even raw data on greenhouse gas levels or measurements of the extent of glaciers, are often not a part of peer-reviewed literature.

Cogley recommends two main changes. First, all sources cited should be readily available to reviewers, which would have made it easier to see that the source for the 2035 date was a news report. (The IPCC report does not cite the New Scientist, but rather another document, which in turn cited the New Scientist.) Second, he says, researchers working on different parts of the IPCC report should work together more closely. Kaser says that if, during the normal review process, even one of the many glaciologists who worked on the Working Group I report ("The Physical Science Basis") had carefully read the Working Group II report, the error would have been caught.

(The claim that the Himalayan glaciers could disappear within 25 years was included in the recent Technology Review feature "The Geoengineering Gambit." A correction can be found here.)

By Kevin Bullis

New Compound Improves MRI Contrast Agents

Magnetic resonance imaging (MRI) has become an indispensable medical diagnostic tool because of its ability to produce detailed, 3D pictures of tissue in the body. Radiologists often inject patients with contrast agents to make certain tissues, such as tumors, stand out more on the final image. Now, researchers have synthesized an MRI contrast agent that is 15 times more sensitive than the compounds currently used. This could allow less contrast agent to be used, thus reducing the potential for harmful side effects.

Nano pyramid: Gadolinium ions (green) are chemically linked to the surface of a nanodiamond to form an MRI contrast agent 15 times more sensitive than current ones.

The researchers created the new compound by chemically linking gadolinium ions to nanodiamonds--tiny clusters of carbon atoms just a few nanometers in diameter. Gadolinium, a rare-earth metal, is used in MRI contrast agents because of its strong paramagnetic properties (magnetism in response to an applied magnetic field). But alone, gadolinium is toxic, so it has to be bonded to other, biocompatible molecules to be used clinically. Many groups have been trying to improve the properties of gadolinium-based contrast agents by attaching the metal to a variety of materials, ranging from large organic molecules to nanoparticles.

"We've done this with many classes of nanoparticles and have never seen this extraordinary increase in sensitivity," says Thomas J. Meade, professor of chemistry and director of the Center for Advanced Molecular Imaging at Northwestern University. He and his colleagues published their findings online in Nano Letters last month.

Meade collaborated with Dean Ho, assistant professor of biomedical and mechanical engineering at Northwestern, and his group, which has been studying nanodiamonds as vehicles for drug delivery. Ho says that nanodiamonds, unlike some carbon nanomaterials, are well-tolerated by cells and do not change cells' gene expression in undesirable ways. The researchers coupled the nanodiamonds to gadolinium and tested the properties of the resulting complex to assess how good an MRI contrast agent it might be.

MRI works by surrounding a patient with a powerful magnetic field, which aligns the nuclei of hydrogen atoms in the body. Radio wave pulses systematically probe small sections of tissues, knocking those atoms out of alignment. When they relax back into their previous state, the atoms emit a radio frequency signal that can be detected and translated into an image.

Because of its strong paramagnetic properties, gadolinium alters the relaxation of hydrogen atoms when it's nearby. Contrast agents containing gadolinium can be designed to collect preferentially in tumors, thus enhancing the contrast between the tumor and the surrounding tissue. The contrast agent's ability to alter the relaxation of hydrogen atoms is expressed as "relaxivity," which accounts for the relaxation time and the concentration of gadolinium in the tissues.
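That definition amounts to a simple linear relation: the observed relaxation rate (the inverse of the relaxation time) is the tissue's intrinsic rate plus the relaxivity multiplied by the gadolinium concentration. The sketch below illustrates the idea with assumed numbers; the intrinsic rate, the conventional agent's relaxivity, and the doses are purely illustrative, and only the 15-fold sensitivity factor comes from the study.

```python
# Illustrative sketch of the relaxivity relation for a T1 contrast agent:
#     R1_observed = R1_intrinsic + r1 * [Gd]
# where R1 = 1/T1 (per second), r1 is relaxivity (per mM per second),
# and [Gd] is the gadolinium concentration in mM.
# All numeric values below are assumed for illustration.

def observed_R1(R1_intrinsic, r1, gd_mM):
    """Observed longitudinal relaxation rate, in s^-1."""
    return R1_intrinsic + r1 * gd_mM

R1_tissue = 1.0            # assumed intrinsic tissue rate (T1 = 1 s)
r1_standard = 4.0          # assumed relaxivity of a conventional agent
r1_new = 15 * r1_standard  # the reported 15-fold gain in sensitivity

# Dose of the more sensitive agent needed to match the rate change
# produced by 0.1 mM of the conventional agent:
target_delta = r1_standard * 0.1
dose_new = target_delta / r1_new
```

On these assumed numbers, matching the contrast boost of 0.1 mM of a conventional agent would take only about 0.007 mM of the higher-relaxivity compound, which is why a more sensitive agent permits a smaller dose.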

The high relaxivity of the gadolinium-nanodiamond compound can be partly attributed to its ability to attract water, which helps boost the MRI signal. "If you look at the shape of a nanodiamond, it's like a soccer ball but more angular around the faces," Ho says. "It's not totally round." The different faces have alternating positive and negative charges, which helps to orient water molecules opposite to each other, creating a tight shell of water around the nanodiamond.

The researchers tested the gadolinium-nanodiamond on different types of cells in the lab and did not find evidence of toxicity. The next step, Ho says, is to test the compound's safety and effectiveness as a contrast agent in animals. "We're excited to see what kind of increased performance we can get," Ho says.

"I think it's a very interesting system," says Kenneth N. Raymond, professor of chemistry at the University of California at Berkeley. "They've obviously got a one order of magnitude increase in relaxivity that's quite significant." Many researchers have tried to attach gadolinium ions to high molecular weight compounds, like proteins and dendrimers, he says. "The little nanodiamond, as far as I know, is quite novel, and, I think, a very clever thing to do."

Currently, radiologists need to inject what amounts to several grams of gadolinium into a patient to get good contrast on an MRI. By increasing the sensitivity of the contrast agent ten-fold, "you could use one-tenth as much gadolinium," Raymond says. "There's a lot of concern in the clinic for certain classes of patients about gadolinium toxicity. Toxicity is very closely connected to dose."

By Corinna Wu

A Synchronous Clock Made of Bacteria

It's not your typical clock. Rather than a quartz movement and sweeping second hand, the heart of this device is a colony of genetically engineered bacteria. A deceptively simple circuit of genes allows the microorganisms to keep time with synchronized pulses of fluorescent light, beating with a slow, rhythmic flicker of 50 to 100 minutes.

Bacterial clock: Scientists have engineered bacteria to glow in synchronous waves, as shown in this still image from a video. The genetic circuit might one day be used to detect toxins or deliver drugs.

The bacteria represent the first synchronized genetic oscillator. Scientists say the tool will be foundational for synthetic biology, an offshoot of genetic engineering that attempts to create microorganisms designed to perform useful functions. The oscillator might one day provide the basis for new biosensors tuned to detect toxins, or for cellular drug delivery systems designed to release chemicals into the body at preprogrammed intervals.

Oscillators are an integral part of the biological world, defining cycles from heartbeats to brain waves to circadian rhythms. They also provide a vital control mechanism in electronic circuits. Biologists first set out to engineer a biological version more than a decade ago, creating a circuit dubbed the "repressilator." (The creation of the repressilator, along with a genetic on-off switch, in 2000 is generally considered the birth of synthetic biology.) However, early oscillators lacked precision--the rhythm quickly decayed, and its frequency and amplitude couldn't be controlled.

In 2008, Jeff Hasty and his team at the University of California, San Diego, created a more robust oscillator that could be tuned by the temperature at which the bacteria were grown, the nutrients they were fed, and specific chemical triggers. But the oscillations were still limited to individual cells--the bacteria did not flash together in time. In the new research, published today in the journal Nature, Hasty and colleagues build on this work by incorporating quorum-sensing, a molecular form of communication that many bacteria use to coordinate their activity.

The new oscillator consists of a simple circuit of two genes that creates both a positive and negative feedback loop. The circuit is activated by a signaling molecule, which triggers the production of both more of itself and of a glowing molecule called green fluorescent protein. The signaling molecule diffuses out of the cell and activates the circuit in neighboring bacteria.

The activated circuit also produces a protein that breaks down the signaling molecule, providing a time-delayed brake to the cycle. The dynamic interactions of different parts of the circuit in individual and neighboring cells create regular pulses of the signaling molecule and the fluorescent protein, appearing as a wave of synchronous activity. It's "a feat analogous to engineering all the world's traffic lights to blink in unison," wrote Martin Fussenegger, a bioengineer at the Swiss Federal Institute of Technology, in Zurich, in a commentary accompanying the paper in Nature.
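The dynamic just described, production that is shut down by a time-delayed brake, can be caricatured with a single delay differential equation. The following is an illustrative toy model of delayed negative feedback, not the published circuit from the Nature paper, and all parameters are assumed: a species x represses its own production with a delay tau, which is enough to generate sustained pulses.

```python
# Toy model of a time-delayed negative-feedback oscillator (illustrative
# parameters, not the published circuit): production of a species x is
# repressed, with Hill coefficient n, by x's own level a delay tau earlier:
#     dx/dt = beta / (1 + x(t - tau)**n) - x
# Integrated with a simple Euler scheme plus a history buffer for the delay.

def simulate(beta=2.0, n=10, tau=5.0, dt=0.01, t_end=200.0, x0=0.5):
    steps = int(round(t_end / dt))
    lag = int(round(tau / dt))
    x = [x0] * (lag + 1)            # constant history for t <= 0
    for _ in range(steps):
        x_delayed = x[-lag - 1]     # x(t - tau)
        dx = beta / (1.0 + x_delayed ** n) - x[-1]
        x.append(x[-1] + dt * dx)
    return x

traj = simulate()
tail = traj[len(traj) // 2:]        # discard the initial transient
amplitude = max(tail) - min(tail)
# Sustained pulses: the trajectory keeps crossing the fixed point x* = 1
crossings = sum(1 for a, b in zip(tail, tail[1:])
                if (a - 1.0) * (b - 1.0) < 0)
```

With these assumed parameters the fixed point (x* = 1, since beta / (1 + 1) = 1) is unstable and the trajectory settles into relaxation-style pulses; shortening the delay enough kills the oscillation, loosely mirroring how such a circuit's rhythm depends on its time constants.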

The colonies of bacteria are grown in a custom-designed microfluidics chip, a device that allows scientists to precisely control the conditions the microorganisms are exposed to. Changing the rate at which nutrients flow into the chip alters the period of the oscillations, says Hasty.

"The ability to synchronize activity among cells in a population could be an important building block for many applications, from biomedicine to bioenergy," says Ron Weiss, a former TR35 winner and a bioengineer at MIT who was not involved in the research. For example, the bacteria could be engineered to detect a specific toxin, with the frequency of the fluorescence indicating its concentration in the environment. While a microscope is currently needed to read the output, Hasty's team is now working on a version that can be seen with the naked eye.

The oscillator could also be used to deliver drugs, such as insulin, that function best when dosed at certain intervals. "In the future you could think of implants that produce a therapeutic effect," says Fussenegger. The dosing of the drug would relate to the strength or amplitude of the oscillation, while the timing of the dosing would be determined by its frequency. "There would be nothing to worry about for the patient," he says.

Researchers are now trying to make the system more robust, as well as to extend the timescale over which it can synchronize activity. They also want to combine it with previous genetic oscillators, and transfer it into different cell types that might be suited to different biotech applications.

By Emily Singer

Solar Shingles See the Light of Day

Dow Chemical is moving full speed ahead to develop roof shingles embedded with photovoltaic cells. To facilitate the move, the U.S. Department of Energy has backed Dow's efforts with a $17.8 million tax credit that will help the company launch an initial market test of the product later this year.

Sunny future: Dow Chemical hopes to transform the solar power industry by integrating solar cells with conventional roofing shingles.

In October 2009, the chemical giant unveiled its product, which can be nailed to a roof like ordinary shingles by roofers without the help of specially trained solar installers or electricians. The solar shingles will cost 30 to 40 percent less than other solar-embedded building materials and 10 percent less than the combined costs of conventional roofing materials and rack-mounted solar panels, according to company officials.

Dow isn't the first company to incorporate solar cells into building materials. In recent years, a number of leading solar manufacturers have launched small lines of solar shingles, tiles, and window glazes. But as Dow looks to bring its shingles mainstream, other solar manufacturers are backing away from the products. Suntech Power, the Chinese solar maker and the world's largest crystalline silicon photovoltaic manufacturer, has several integrated solar systems on the market, but with the recent downturn in new housing construction, the company has focused instead on ramping up conventional photovoltaic panel output, says Jeffrey Shubert, Suntech Power marketing director for North and South America.

According to analyst Johanna Schmidtke of Boston-based Lux Research, building-integrated solar installations are, despite manufacturers' claims, still significantly more expensive than conventional rack-mounted solar arrays due to increased costs associated with manufacturing and installation. The devices currently occupy niche markets for those willing to pay a premium for the aesthetic value of the less-obtrusive integrated systems.

Companies looking to develop solar shingles and other solar-integrated building materials have also had to overcome significant design and materials challenges. "Putting solar panels directly into the roof or skin of a building requires a product that has structural integrity, weathering ability, and electrical integrity," says Mark Farber, a senior consultant with Photon Consulting in Boston. "It has to be a good building material and a good power generator, and achieving both is hard to do."

Plug and play: Dow's Powerhouse Solar Shingles nail in like conventional shingles and interconnect electrically through rigid plugs at the end of each shingle.

To address cost and performance challenges, Dow partnered with solar cell producer Global Solar Energy, one of the early developers of copper indium gallium selenide (CIGS) thin films. CIGS thin-film semiconductors are less expensive than conventional crystalline silicon solar panels and offer some of the highest conversion efficiencies of emerging thin films.

For each of Dow's shingles, Global Solar will manufacture strings of five interconnected solar cells. Dow will then encapsulate each string with glass and polymers and embed it into a shingle with electrical plugs at each end that link the individual shingle into a larger array.

Dow is leveraging its ties within the building materials and construction industries to develop, test, and distribute its shingles. Installations can be completed in half the time of conventional solar installations, and an electrician is only needed to make the final connection to the building's electrical system, according to David Parrillo, senior research and development director of Dow Solar Solutions.

The DOE also awarded United Solar Ovonic of Rochester Hills, MI, $13.3 million in tax credits to ramp up production and increase the efficiency of its building-integrated photovoltaic materials. Unlike Dow, the company produces amorphous silicon thin films that are encapsulated entirely in polymers. Amorphous silicon offers lower efficiencies--currently 6.5 to 7 percent at the array level--than the CIGS shingles that Dow is developing. Silicon, however, is a less expensive material than CIGS and is less susceptible to moisture. As a result, the integrated solar cells built by United Solar Ovonic don't require glass covers like Dow's shingles, allowing them greater flexibility.

By Phil McKenna