Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical optical switch on and off at a very high speed........

Better Batteries: New Technology Improves Both Energy Capacity and Charge Rate in Rechargeable Batteries

A team of engineers has created an electrode for lithium-ion batteries -- rechargeable batteries such as those found in cellphones and iPods -- that allows the batteries to hold a charge up to 10 times greater than current technology. Batteries with the new electrode also can charge 10 times faster than current batteries.

 New research could lead to rechargeable lithium-ion batteries that hold a charge up to 10 times greater than current technology and that charge 10 times faster than current batteries.

The researchers combined two chemical engineering approaches to address two major battery limitations -- energy capacity and charge rate -- in one fell swoop. In addition to better batteries for cellphones and iPods, the technology could pave the way for more efficient, smaller batteries for electric cars.

The technology could be seen in the marketplace in the next three to five years, the researchers said.

A paper describing the research was published in the journal Advanced Energy Materials.

"We have found a way to extend a new lithium-ion battery's charge life by 10 times," said Harold H. Kung, lead author of the paper. "Even after 150 charges, which would be one year or more of operation, the battery is still five times more effective than lithium-ion batteries on the market today."

Kung is professor of chemical and biological engineering in the McCormick School of Engineering and Applied Science. He also is a Dorothy Ann and Clarence L. Ver Steeg Distinguished Research Fellow.

Lithium-ion batteries charge through a chemical reaction in which lithium ions are sent between two ends of the battery, the anode and the cathode. As energy in the battery is used, the lithium ions travel from the anode, through the electrolyte, and to the cathode; as the battery is recharged, they travel in the reverse direction.

With current technology, the performance of a lithium-ion battery is limited in two ways. Its energy capacity -- how long a battery can maintain its charge -- is limited by the charge density, or how many lithium ions can be packed into the anode or cathode. Meanwhile, a battery's charge rate -- the speed at which it recharges -- is limited by another factor: the speed at which the lithium ions can make their way from the electrolyte into the anode.

In current rechargeable batteries, the anode -- made of layer upon layer of carbon-based graphene sheets -- can only accommodate one lithium atom for every six carbon atoms. To increase energy capacity, scientists have previously experimented with replacing the carbon with silicon, as silicon can accommodate much more lithium: four lithium atoms for every silicon atom. However, silicon expands and contracts dramatically during charging, causing it to fragment and rapidly lose its charge capacity.
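
Those ratios are enough for a back-of-envelope comparison of theoretical capacities. The sketch below is a rough check, not from the paper, and ignores voltage and packaging; it simply converts 1 Li per 6 C and 4 Li per Si into specific capacities:

```python
# Back-of-envelope check of the capacity gain the article describes,
# using the stated ratios: 1 Li per 6 C (graphite) vs. 4 Li per 1 Si.
F = 96485  # Faraday constant, coulombs per mole of charge

def specific_capacity_mAh_per_g(li_per_host, host_molar_mass):
    """Theoretical capacity of the host material in mAh/g."""
    charge_per_mol = li_per_host * F                 # C per mole of host
    return charge_per_mol / (3.6 * host_molar_mass)  # 1 mAh = 3.6 C

graphite = specific_capacity_mAh_per_g(1, 6 * 12.011)  # LiC6
silicon  = specific_capacity_mAh_per_g(4, 28.086)      # ~Li4Si

print(f"graphite: {graphite:.0f} mAh/g")  # ~372 mAh/g
print(f"silicon:  {silicon:.0f} mAh/g")   # ~3800 mAh/g, roughly 10x
```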

Currently, a battery's charge rate is hindered by the shape of the graphene sheets: they are extremely thin -- just one carbon atom thick -- but, by comparison, very long. During the charging process, a lithium ion must travel all the way to the outer edges of the graphene sheet before entering and coming to rest between the sheets. And because it takes so long for lithium to travel to the middle of the graphene sheet, a sort of ionic traffic jam occurs around the edges of the material.

Now, Kung's research team has combined two techniques to combat both these problems. First, to stabilize the silicon in order to maintain maximum charge capacity, they sandwiched clusters of silicon between the graphene sheets. This allowed for a greater number of lithium atoms in the electrode while utilizing the flexibility of graphene sheets to accommodate the volume changes of silicon during use.

"Now we almost have the best of both worlds," Kung said. "We have much higher energy density because of the silicon, and the sandwiching reduces the capacity loss caused by the silicon expanding and contracting. Even if the silicon clusters break up, the silicon won't be lost."

Kung's team also used a chemical oxidation process to create minuscule holes (10 to 20 nanometers) in the graphene sheets -- termed "in-plane defects" -- so the lithium ions would have a "shortcut" into the anode and be stored there by reaction with silicon. This reduced the time it takes the battery to recharge by up to 10 times.

This research was all focused on the anode; next, the researchers will begin studying changes in the cathode that could further increase effectiveness of the batteries. They also will look into developing an electrolyte system that will allow the battery to automatically and reversibly shut off at high temperatures -- a safety mechanism that could prove vital in electric car applications.

The Energy Frontier Research Center program of the U.S. Department of Energy, Basic Energy Sciences, supported the research.

The paper is titled "In-Plane Vacancy-Enabled High-Power Si-Graphene Composite Electrode for Lithium-Ion Batteries." Other authors of the paper are Xin Zhao, Cary M. Hayner and Mayfair C. Kung, all from Northwestern.

From sciencedaily

World's Lightest Material Is a Metal 100 Times Lighter Than Styrofoam

The research team's findings appear in the Nov. 18 issue of Science.

The new material redefines the limits of lightweight materials because of its unique "micro-lattice" cellular architecture. The researchers were able to make a material that consists of 99.99 percent air by designing the 0.01 percent solid at the nanometer, micron and millimeter scales. "The trick is to fabricate a lattice of interconnected hollow tubes with a wall thickness 1,000 times thinner than a human hair," said lead author Dr. Tobias Schaedler of HRL.
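
As a rough sanity check on those numbers, using assumed densities that are not in the article (a bulk nickel alloy of about 8.9 g/cm3 for the solid, and Styrofoam grades spanning roughly 15 to 100 mg/cm3):

```python
# Back-of-envelope density for a lattice that is 99.99 percent air.
# Assumed values, not from the article: bulk nickel alloy ~8.9 g/cm3;
# expanded polystyrene (Styrofoam) ~50 mg/cm3 for a typical grade.
solid_density = 8900.0   # mg/cm3, bulk nickel alloy (assumption)
solid_fraction = 1e-4    # 0.01 percent solid
styrofoam = 50.0         # mg/cm3 (assumption)

lattice = solid_density * solid_fraction
print(f"micro-lattice density: ~{lattice:.1f} mg/cm3")       # ~0.9 mg/cm3
print(f"lighter than this Styrofoam by ~{styrofoam / lattice:.0f}x")
# Depending on the foam grade, the ratio lands in the tens to hundreds,
# consistent with the headline's "100 times lighter" claim.
```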

 New metal - which is 99.99 percent air - is so light that it can sit atop dandelion fluff without damaging it.

The material's architecture allows unprecedented mechanical behavior for a metal, including complete recovery from compression exceeding 50 percent strain and extraordinarily high energy absorption.

"Materials actually get stronger as the dimensions are reduced to the nanoscale," explained UCI mechanical and aerospace engineer Lorenzo Valdevit, UCI's principal investigator on the project. "Combine this with the possibility of tailoring the architecture of the micro-lattice and you have a unique cellular material."

Developed for the Defense Advanced Research Projects Agency, the novel material could be used for battery electrodes and acoustic, vibration or shock energy absorption.

William Carter, manager of the architected materials group at HRL, compared the new material to larger, more familiar edifices: "Modern buildings, exemplified by the Eiffel Tower or the Golden Gate Bridge, are incredibly light and weight-efficient by virtue of their architecture. We are revolutionizing lightweight materials by bringing this concept to the nano and micro scales."

From sciencedaily

New Lightning-Fast, Efficient Nanoscale Data Transmission

Stanford's Jelena Vuckovic, an associate professor of electrical engineering, and Gary Shambat, a doctoral candidate in electrical engineering, announced their device in a research paper in the journal Nature Communications.

 This carrier holds a single chip containing hundreds of the Stanford low-power LEDs integrated together.

Vuckovic had earlier this year produced a nanoscale laser that was similarly efficient and fast, but that device operated only at temperatures below 150 kelvins, about minus 190 degrees Fahrenheit, making it impractical for commercial use. The new device operates at room temperature and could, therefore, represent an important step toward next-generation computer chips.

"Low-power, electrically controlled light sources are vital for next-generation optical systems to meet the growing energy demands of the computer industry," said Vuckovic. "This moves us in that direction significantly."

Single-mode light
The LED in question is a "single-mode LED," a special type of diode that emits light at more or less a single wavelength, much like a laser.

"Traditionally, engineers have thought only lasers can communicate at high data rates and ultralow power," said Shambat. "Our nanophotonic, single-mode LED can perform all the same tasks as lasers, but at much lower power."

Nanophotonics is key to the technology. In the heart of their device, the engineers have inserted little islands of the light-emitting material indium arsenide, which, when pulsed with electricity, produce light. These "quantum dots" are surrounded by photonic crystal -- an array of tiny holes etched in a semiconductor. The photonic crystal serves as a mirror that bounces the light toward the center of the device, confining it inside the LED and forcing it to resonate at a single frequency.
"In other words, it becomes single-mode," said Shambat.

"Without these nanophotonic ingredients -- the quantum dots and the photonic crystal -- it is impossible to make an LED efficient, single-mode and fast all at the same time," said Vuckovic.

Engineering ingenuity
The new device includes a bit of engineering ingenuity, too. Existing devices are actually two devices, a laser coupled with an external modulator. Both devices require electricity. Vuckovic's diode combines light transmission and modulation functions into one device, drastically reducing energy consumption.

In tech-speak, the new LED device transmits data, on average, at 0.25 femtojoules per bit of data. By comparison, today's typical "low" power laser device requires about 500 femtojoules to transmit the same bit.
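
Taking those two figures at face value, a quick calculation shows where the "2,000 times" claim in the next quote comes from, along with a hypothetical illustration of what the energies mean at a 10 Gb/s link rate (the data rate is my assumption, not a figure from the paper):

```python
# Ratio quoted in the article: 0.25 fJ/bit for the LED link vs. ~500 fJ/bit
# for a typical low-power laser transmitter.
led_fj, laser_fj = 0.25, 500.0
print(f"improvement: {laser_fj / led_fj:.0f}x")  # 2000x

# Hypothetical illustration: power draw at an assumed 10 Gb/s data rate.
rate_bps = 10e9
print(f"LED:   {led_fj * 1e-15 * rate_bps * 1e6:.1f} microwatts")   # 2.5 uW
print(f"laser: {laser_fj * 1e-15 * rate_bps * 1e3:.1f} milliwatts") # 5.0 mW
```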

"Our device is some 2,000 times more energy efficient than best devices in use today," said Vuckovic.
Stanford Professor James S. Harris, former PhD student Bryan Ellis and doctoral candidates Arka Majumdar, Jan Petykiewicz and Tomas Sarmiento also contributed to this research.

From sciencedaily

An Ultrathin Brain Implant Monitors Seizures

A new, ultrathin, ultraflexible implant loaded with sensors can record the electrical storm that erupts in the brain during a seizure with nearly 50-fold greater resolution than was previously possible. The level of detail could revolutionize epilepsy treatment by allowing for less invasive procedures to detect and treat seizures. It could also lead to a deeper understanding of brain function and result in brain-computer interfaces with unprecedented capacity.

 Brain map: An ultrathin array of electrodes, shown at top being inserted into the brain of a cat, allows for data acquisition far greater than ever before possible. At bottom, the electrode array is so flexible that it can fold around even the slimmest objects, allowing for easy insertion and good coverage of uneven surfaces.

For epilepsy patients who don't respond to medication, neurologists will often try to map where in the brain the seizure originated so that region can be surgically removed. The doctor removes a section of skull and places a bulky sensor array on the surface of the patient's frontal cortex. 

"These clinical devices haven't changed much since the '50s or '60s," says Brian Litt, an epilepsy specialist and bioengineer at the University of Pennsylvania and one of the scientists who led the new research. Because the device has to accommodate wires for each electrode, it only has space for fewer than 100 electrodes and gives a poor resolution picture of the electrical activity. "It's like trying to understand what's going on in a crowd in Manhattan with a single microphone suspended from a helicopter," Litt says. 

 Out of control: An epileptic seizure in a cat, as measured by the new electrode-dense implant, shows a never-before-seen spiral wave of electrical activity.

Current technology has stalled out at a sensor array with about eight sensors per square centimeter; the new array—built in collaboration with John Rogers, a professor of materials science and engineering at the University of Illinois Urbana-Champaign—can fit 360 sensors in the same amount of space. To create a small device so densely packed with sensors, Rogers integrated electronics and silicon transistors into the array itself, drastically reducing the amount of wiring.
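
The density figures translate directly into the resolution gain quoted in the opening paragraph:

```python
# The resolution claim in numbers: sensors per square centimeter.
old_density = 8    # conventional clinical arrays (per cm^2)
new_density = 360  # the flexible array described here (per cm^2)
print(f"{new_density / old_density:.0f}x more sensors per unit area")
# -> 45x, the "nearly 50-fold" improvement quoted above
```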

"This is more like an array of 360 microphones, lowered closer to the surface and recorded from much smaller regions: a couple of people at the street corner, a couple by the mailbox," Litt says. "This new technique could be the key to understanding functional networks in the brain, and could even be the key to treating and potentially curing some diseases."

In their first test of the device, on a cat with epilepsy, Litt, Rogers, and graduate student Jonathan Viventi (now an assistant professor studying translational neuroengineering at New York University),  saw something striking: a storm of activity that looked like a self-propagating spiral wave. The pattern, only apparent with incredibly high-resolution recording, is remarkably similar to one seen in cardiac muscle during a life-threatening condition called ventricular fibrillation. 

Rather than originating in large sections of the brain, as Litt says has traditionally been thought, seizures appear instead to stem from multiple clusters of very small areas, or "microdomains," in the cortex. The research was published online last week in Nature Neuroscience.

"This is absolutely terrific. I was astounded by the technical accomplishment, and the very strong and important results," says Gerwin Schalk, a brain-computer interface researcher at the Wadsworth Center in Albany, New York. Schalk was not involved in the research. "It will be of tremendous value for basic neuroscience and for translational research." Schalk notes that if the technology proves itself in humans, it could open up substantial opportunities for everything from diagnostics to brain-computer interface devices.

The device could also enable less-invasive testing and treatment. Rather than cutting open a large section of skull to place a monitoring device, Litt says, the new implant could allow surgeons to drill just a small hole through which to slip the slim, rolled-up sensor array, and unfurl it onto the brain's surface once it's inside. And instead of removing areas of brain the size of a golf ball, it might be possible to just remove the microdomains and leave the rest of the cortex intact.

The current version of the device is one square centimeter; for human use, researchers need to expand it to about eight square centimeters. A startup called MC10 will work on making it larger and production-ready.

Litt and Rogers are now working to create an implant with stimulators embedded next to the sensors. If they can build a device that not only detects the onset of a seizure but can just as quickly provide electrical stimulation to quash it, the research could have great clinical impact. "This isn't just a research tool. It has a clearly defined mode of use in the clinical setting," Rogers says. "This is a piece of biointegrated electronics that is unmatched in its functionality, and the proof is in the pudding."

By Lauren Gravitz
From Technology Review

Google and Microsoft Talk Artificial Intelligence

Google and Microsoft don't share a stage often, being increasingly fierce competitors in areas such as Web search, mobile, and cloud computing. But the rivals can agree on some things—like the importance of artificial intelligence to the future of technology.

 Meeting of the minds: Peter Norvig (top) and Eric Horvitz agree that AI is a key to the future of technology.

Peter Norvig, Google's director of research, and Eric Horvitz, a distinguished scientist at Microsoft Research, recently spoke jointly to an audience at the Computer History Museum in Mountain View, California, about the promise of AI. Afterward, the pair talked with Technology Review's IT editor, Tom Simonite, about what AI can do today, and what they think it'll be capable of tomorrow. Artificial intelligence is a complex subject, and some answers have been edited for brevity.

Technology Review: You both spoke on stage of how AI has been advanced in recent years through the use of machine-learning techniques that take in large volumes of data and figure out things like how to translate text or transcribe speech. What about the areas we want AI to help where there isn't lots of data to learn from?

Peter Norvig: What we're doing is like looking under the lamppost for your dropped keys because the light is there. We did really well with text and speech because there's lots of data in the wild. Parsing [breaking down the grammatical elements of sentences] never occurs naturally, except perhaps in someone's linguistics homework, so we have to learn that without [labeled] data. One of my colleagues is trying to get around that by looking at which parts of online text have been made links—that can signal where a particular part of a sentence is.

Eric Horvitz: I've often thought that if you had a cloud service in the sky that recorded every speech request and what happened next—every conversation in every taxi in Beijing, for example—it could be possible to have AI learn how to do everything.
More seriously, if we can find ways to capture lots of data in a way that preserves privacy, we could make that possible.

Isn't it difficult to use machine learning if the training data isn't already labeled and explained, to give the AI a "truth" to get started from?

Horvitz: You don't need it to be completely labeled. An area known as semi-supervised learning is showing us that even if 1 percent or less of the data is tagged, you can use that to understand the rest.
But a lack of labels is a challenge. One solution is to actually pay people a small amount to help out a system with data it can't understand, by doing microtasks like labeling images or other small things. I think using human computation to augment AI is a really rich area.

Another possibility is to build systems that understand the value of information, meaning they can automatically compute what the next best question to ask is, or how to get the most value out of an additional tag or piece of information provided by a human.
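
For readers unfamiliar with the semi-supervised technique Horvitz mentions, here is a minimal, illustrative self-training loop on synthetic data (my own sketch, not code from either company): a classifier is trained on the small labeled slice, then repeatedly adopts its most confident predictions on unlabeled points as new labels.

```python
# Minimal self-training sketch of semi-supervised learning: a classifier
# trained on ~1% labeled data pseudo-labels the unlabeled pool, keeping
# only its most confident guesses each round. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:50] = True                       # only 1% starts out labeled

model = LogisticRegression(max_iter=1000)
for _ in range(5):                        # a few self-training rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95  # trust only confident predictions
    idx = np.flatnonzero(~labeled)[confident]
    y[idx] = model.predict(X[idx])        # adopt the pseudo-labels
    labeled[idx] = True

print(f"labeled after self-training: {labeled.mean():.0%}")
```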

Norvig: You don't have to tell a learning system everything. There's a type of learning called reinforcement learning where you just give a reward or punishment at the end of a task. For example, you lose a game of checkers and aren't told where you went wrong; you have to learn what to do differently to get the reward next time.
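
Norvig's checkers example can be made concrete with a toy tabular Q-learning loop (an illustrative sketch, not anything from Google): the agent is rewarded only on reaching the goal, yet the update rule propagates that end-of-task reward backward until every state knows the right move.

```python
# Toy reinforcement learning: reward arrives only at the end of an episode,
# and tabular Q-learning works backward to credit the earlier moves.
# A 1-D "walk to the goal" task; entirely illustrative.
import random

N_STATES, GOAL, EPISODES = 6, 5, 500
ALPHA, GAMMA = 0.5, 0.9
EPS = 0.5  # high exploration keeps this tiny example simple
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

for _ in range(EPISODES):
    s = 0
    while s != GOAL:
        a = random.choice((-1, 1)) if random.random() < EPS else \
            max((-1, 1), key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0  # reward only at the end
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s2

print([max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(GOAL)])
# -> [1, 1, 1, 1, 1]: always step toward the goal
```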

All this is very different from the early days of artificial intelligence, in the '50s and '60s, when researchers made bold predictions about matching human ability and tried to use high-level rules to create intelligence. Are your machine-learning systems working out those same high-level rules for themselves?

Horvitz: Learning systems can derive high-level situational rules for action, for example, to take a set of [physiological] symptoms and test results and spit out a diagnosis. But that isn't the same as general rules of intelligence.

It may be that the more low-level work we do today will meet the top-down ideas from the bottom up one day. The revolution that Peter and I were part of in AI was that decision making under uncertainty was so important and could be done with probabilistic approaches. Along with the probabilistic revolution in AI comes perspective: we are very limited agents, and incompleteness is inescapable.

Norvig: In the early days, there was logic that set artificial intelligence apart, and the question was how to use it. The field became the study of what these tools were good for, like chess. But you can then only have things that are true or false, and you can't do a lot of things we want to do, so we went toward probability. It took the field a while to recognize that other approaches, like probability and decision theory, were out there. Bringing those two approaches together is a challenge.

As we see more direct evidence of AI in real life, for example, Siri, it seems that a kind of design problem has been created. People creating AIs need to make them palatable to our own intelligence.

Norvig: That's actually a set of problems at various levels. We know the human vision system and what making buttons different colors might mean, for example. At a higher level, the expectations in our head of something and how it should behave are based on what we think it is and how we think of its relationship to us.

Horvitz: AI is intersecting more and more with the field of computer human interaction [studying the psychology of how we use and think about computers]. The idea that we will have more intelligent things that work closely with people really focuses attention on the need to develop new methods at the intersection of human intelligence and machine intelligence.

What do we need to know more about to make AIs more compatible with humans?
Horvitz: One thing my research group has been pushing to give computers is a systemwide understanding of human attention, to know when best to interrupt a person. It's been a topic of research between us researchers and the product teams.

Norvig: I think we also want to understand the human body a lot more, and you can see in Microsoft's Kinect a way to do that. There's lots of potential to have systems understand our behavior and body language.

Is there any AI in Kinect?
Horvitz: There's quite a lot of machine learning at the core of it. I think the idea that we can take leading-edge AI and develop a consumer device that sold faster than any other before in history says something about the field of AI. Machine learning also plays a central role in Bing search, and I can only presume is also important in Google's search offering. So, people searching the Web use AI in their daily lives.

One last question: Can you tell me one recent demo of AI technology that impressed you?

Norvig: I recently read a paper on unsupervised learning by someone at Google who is about to go back to Stanford—an area where the curves of our improvement over time have not looked so good. But he's getting some really good results, and it looks like learning when you don't know anything in advance could be about to get a lot better.

Horvitz: I've been very impressed by apprentice learning, where a system learns by example. It has lots of applications. Berkeley and Stanford both have groups really advancing that: for example, helicopters that learn to fly on their backs [upside-down] from [observing] a human expert.

By Tom Simonite
From Technology Review

Startup to Capture Lithium from Geothermal Plants

As portable electronics get more popular and the market for electric vehicles takes off, demand for lithium—a critical element in rechargeable lithium-ion batteries—could soar. Yet just two countries, Chile and Australia, dominate global lithium production.

 Brine time: A Simbol Materials engineer works on equipment used to separate lithium, manganese, and zinc from geothermal brine.

 California startup Simbol Materials thinks it can increase domestic production of lithium by extracting the element, along with manganese and zinc, from the brine used by geothermal plants.

In the late 1990s, the U.S. produced 75 percent of the world's lithium carbonate, but now it makes only 5 percent. This is, in part, because U.S. manufacturers couldn't compete with low-cost lithium chemicals from Chile. The U.S. produces no manganese at all. "Yet we have this resource, already being harnessed for geothermal power production," says Luka Erceg, Simbol's CEO. "This is an enormous opportunity to harvest clean renewable energy and produce critical materials in a sustainable manner." 

Worldwide demand for lithium chemicals was about 102,000 tons in 2010. This is expected to rise to as much as 320,000 tons by 2020, mostly because of increased electric-vehicle use. The world's largest lithium resources are estimated by the U.S. Geological Survey to be in Bolivia. Most manufacturers, including the world's largest, in Chile, typically make the material by pumping brine into pools to evaporate in the sun for 18 to 24 months. This process leaves behind a concentrated lithium chloride that's converted into lithium carbonate. The only U.S. producer, Chemetall Foote, drills for brine at Silver Peak in Nevada.

Simbol plans to piggyback on a 50-megawatt geothermal plant near the Salton Sea in Imperial Valley, California, that pumps hot brine from deep underground to generate steam to drive a turbine. The plant currently injects the brine, which contains 30 percent dissolved solids, including lithium, manganese, and zinc, back into the ground after the steam is produced. Simbol will divert the brine from the power plant, before reinjection, into its processing equipment. There, the still-warm brine will flow through a proprietary medium that filters out the salts within hours. Simbol has also acquired the assets and intellectual property from a now-defunct Canadian company for a purification process that creates the world's highest-purity lithium carbonate. Erceg expects to compete with the lowest-cost Chilean producers, which produce lithium at $1,500 a ton.

Simbol currently runs a pilot plant that filters 20 gallons a minute. Construction of the commercial plant, near the Salton Sea, will begin in 2012; it will have the capacity to produce 16,000 tons of lithium carbonate annually. The world's third-largest producer, by comparison, makes 22,000 tons. By 2020, Simbol plans to triple production by expanding to more geothermal plants, Erceg says. But for now, it is buying low-grade lithium carbonate from other manufacturers for purification, and it expects to sell the high-purity product overseas before the end of this year.

Other lithium-mining projects are planned or underway around the world, including two more in Nevada. Keith Evans, a geologist and industrial minerals expert, says that if they all come online, global production in 2020 could be over 426,000 tons, far outstripping demand. Nevertheless, more U.S. production could make the country self-sufficient. Plus, he says, Simbol could have an advantage over other U.S. companies. "If their process is as good as they say it is, it could be a very-low-cost producer," Evans says. "It is potentially a very exciting project, if it works."

By Prachi Patel
From Technology Review

Researchers Engineer a Mightier Mouse

Mice that grow larger muscles and can run for twice as long as their unaltered littermates before tiring could point toward new treatments for the muscle loss that can occur with aging.   

 Svelte mice: Animals that lack a molecule called NCOR in their fat cells (bottom) show fewer signs of inflammation (light blue) than their normal counterparts (top).

The mice were engineered to lack a molecule called NCOR in their muscle tissue. In a second, related study, knocking out the same molecule in fat resulted in mice that were overweight but sensitive to insulin, a result that could lead to more targeted treatments for diabetes. Both studies were published in the journal Cell last week.

NCOR acts as a dimmer switch for other molecules in a cell. It is known as a corepressor, slowing the production of transcription factors, which in turn regulate the expression of genes. Dimmer-switch molecules are often good drug targets thanks to this subtle effect, says Johan Auwerx, a researcher at École Polytechnique Fédérale de Lausanne, who led the first study, which involved knocking out NCOR in muscle. "That's better from a medical standpoint, because you don't want to turn a molecule all on or off," he says.
Because NCOR acts on different molecules in many parts of the body, Auwerx and others have been using genetic techniques to create mice that lack the protein in only certain types of tissue. Knocking out the molecule in all tissues from birth is lethal.

According to the second study, eliminating the molecule in fat had a very specific effect: fat cells became more sensitive to insulin, as did cells in the muscle and the liver. Insulin resistance is one of the hallmarks of metabolic syndrome, a precursor to type 2 diabetes, so the findings could inform drug development for the disease.

"The results suggest that adipose is the organizing tissue for metabolic syndrome," says Jerrold Olefsky, a researcher at the University of California, San Diego, who led the second study. "If you can treat it, you get systemic effects on other tissues."

On a molecular level, knocking out NCOR appeared to mimic the effect of a class of diabetes drugs known as thiazolidinediones, or TZDs. These drugs target the same molecule as NCOR, but have significant side effects, including hepatitis, liver failure, water retention, and heart failure. Some have been pulled from the market.

The researchers did not see any of these ill effects in the mice, suggesting that if you target treatment to adipose tissue, "you get rid of unwanted side effects," says Olefsky. "Targeting NCOR is better because it has a much more selective role."

Olefsky's team also identified more than 100 genes that are activated by deleting NCOR in fat. They're now studying these genes as potential drug targets.

The mice that lacked NCOR in their muscles had a different outcome—their muscles had many more mitochondria, the power plants of the cell, which allowed them to run longer. "That means better capacity to keep energy levels up," says Auwerx. The researchers knocked out the same gene in the muscle tissue of worms, which also grew larger muscles, suggesting that the same trick should work in other animals.

Auwerx is now looking for drugs that can modulate NCOR levels. Fasting brings levels down, while glucose pushes them up. The results could be useful in treating aging-related muscle loss, which occurs even in old people who exercise, as well as diseases such as muscular dystrophy.

By Emily Singer
From Technology Review

Tiny USB Stick Brings Android to PCs, TVs

Google has made no secret about its plans for Android. Smartphones and tablets are just the beginning — the company wants Android everywhere. And thanks to FXI Technologies’ Cotton Candy USB device, we may not have to wait long to see Android on more than just our mobile devices. 

 The Cotton Candy USB computing device by FXI Technologies is a tiny, pocket-sized computer.

FXI essentially built an ultra-lean computer inside a small USB stick. Stick it into any device that supports USB storage, and Cotton Candy will register as a USB drive. From there, you can run the Android OS in a secure environment inside your desktop, courtesy of a Windows/OS X/Linux-compatible virtualization client embedded in the device.

Stick Cotton Candy into a computer, and Android will appear in a virtualized window on your desktop. But get this: The USB key also features an HDMI connector. This way, you can connect the stick to your TV and use Android on the big screen (though you’ll need some kind of secondary input device, like a Bluetooth mouse/keyboard combo, to get anything done).

Cotton Candy is far more than just Android on a stick. Under its Hot Wheels-sized hood, the device sports a 1.2GHz ARM Cortex A9-based processor (the same basic processor architecture you’ll find in the fastest chips from Apple and Nvidia), as well as ARM’s quad-core Mali GPU and 1GB of RAM. It’s an impressive laundry list of specs, and seems more than capable of fueling Android 2.3, aka Gingerbread, the version of the OS that comes on the device. 

From TVs to car stereo head units to refrigerators and lighting fixtures, it seems no piece of consumer electronics is out of Android’s reach. And, ultimately, getting Android on as many devices as possible gets Google’s search bar and services onto multitudinous screens beyond the desktop environment. This potentially means more ads served, and more revenue for the search company’s core business. 

Android has already appeared in a small number of refrigerators, TVs and automobiles, and if it's widely adopted in the greater gadgetsphere, app-makers could build better, appliance-specific Android apps.

For now, the Cotton Candy USB stick is a stopgap item — a small taste of what Android can be before it bursts outside its mobile boundaries. Unfortunately, since it’s not a proper Android device per se — i.e., it doesn’t comply with enough of Google’s requirements to be considered “official” — you’ll be unable to access the Android Market from the device. Sideloading is still an option, though, so you won’t be left completely app-less. 

Expect Cotton Candy to pop up for less than $200 around mid-2012. 

By Mike Isaac 
From wired

EEG Detects Signs of Awareness in Vegetative Patients

Three brain injury patients diagnosed as being in a vegetative state—meaning they do not respond to their environment—may actually be conscious. Using EEG (electroencephalography) to measure their brain activity, researchers found that the patients could follow simple commands.

 Mind reading: Using EEG to measure brain activity, researchers found that some patients diagnosed as being in a vegetative state could respond to simple commands. The pattern of electrical activity in these patients (one example is shown above) is identical to patterns seen in healthy people.

This supports previous findings from the same group suggesting that some people who appear outwardly unresponsive may have a relatively high level of cognitive capacity. Researchers aim to ultimately develop the approach into a communication tool.

In the study, researchers examined 16 patients with brain injury—some due to traumatic injury and others due to lack of oxygen—and 12 healthy people, asking both groups to imagine moving either their hands or toes while wearing an EEG monitor. They found that, like the healthy people, three of the brain injury patients could reliably generate two distinct brain activity patterns based on the command. One patient did it more than 200 times, which is even more than the healthy participants managed.

The team had previously used functional MRI, or brain imaging, to show that a patient diagnosed as being in a vegetative state could use a similar system to answer yes or no questions. That startling finding rocked the medical world, raising the question of how many of these patients have cognitive function beyond what their outward behavior indicates.

MRI machines are, however, expensive and largely limited to hospitals, making them a difficult tool to study brain injury patients, who are often in rehabilitation or nursing homes. In the new study, researchers used a standard EEG device, which is relatively inexpensive and highly portable. "It's probably about as sensitive as MRI," says Adrian Owen, a researcher at the University of Western Ontario, who led the study. "That means we have something we can get out into the community and use in hospitals or residential homes."

The researchers can detect when someone is thinking about moving a hand versus a toe because the brain activity originates in a different part of the motor cortex, the part of the brain that controls movement. Owen's team spent much of the last year working out how to accurately decode the electrical signals the brain emits when imagining these movements. The findings of the new study were published this week in The Lancet.
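
The paper's exact pipeline isn't described here, but motor-imagery EEG is commonly decoded along these lines: band-power features over the mu/beta rhythms from channels above motor cortex, fed to a linear classifier. The sketch below runs on stand-in random data and is purely illustrative, not the authors' method:

```python
# Hedged sketch of a common imagined-movement EEG decoder (not the authors'
# exact pipeline): band-power features fed to a linear classifier that
# separates "hand" from "toe" imagery.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpower(trial, fs=250, band=(8, 30)):
    """Mean mu/beta-band power per channel, trial shaped (channels, samples)."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

rng = np.random.default_rng(0)
# Stand-in data: 100 trials, 8 channels, 2 s at 250 Hz (real data would
# come from the EEG amplifier). Class 1 gets extra power on channel 3.
trials = rng.standard_normal((100, 8, 500))
labels = rng.integers(0, 2, 100)
trials[labels == 1, 3, :] *= 1.5

X = np.array([bandpower(t) for t in trials])
clf = LinearDiscriminantAnalysis().fit(X[:80], labels[:80])
print("held-out accuracy:", clf.score(X[80:], labels[80:]))
```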

The three patients who could respond via EEG did not share any obvious features; they varied in age, in time since the original injury, and in the type of injury suffered. Owen's team is now using high-resolution functional MRI machines to study these patients' brains in fine detail in hopes of finding some commonality. "Anything we can do to improve our understanding or to learn more about catastrophic brain injuries can help us understand what's going on," says Owen.

They hope to eventually use the EEG setup to ask patients questions, which had been possible with functional MRI. At the moment, researchers can't read the EEG response in real time, making interaction very difficult. "Our priority now is trying to speed it up; then we'll move on to communication," he says. 

What exactly the new findings indicate about the patients' level of consciousness is still controversial. "I think they were entirely aware and conscious of what's going on," says Owen. "For them to do this, they have to have understood the instructions we gave them, to have sustained attention, to keep on task, and to respond. These are all things we associate with consciousness."

Morten Storm Overgaard, head of the Cognitive Neuroscience Research Unit at Aalborg University in Denmark, disagrees. "I think their study is very interesting, but it's hard to argue that there is a link between command-following and consciousness. And there's no independent way of making sure," says Overgaard, who wrote a commentary accompanying the paper. Overgaard does agree, however, that someone who can reliably answer questions via brain activity is likely conscious.

Both Overgaard and Owen say a new classification system is required to accurately reflect the state these patients are in. "While they do meet all the clinical criteria for the vegetative state, we know they are not actually vegetative," says Owen. One suggestion that has yet to catch on is "behavioral unresponsiveness syndrome."

By Emily Singer
From Technology Review

World’s largest digital camera gets green light

A U.S. Department of Energy review panel last week gave a glowing endorsement for the Stanford Linear Accelerator Center (SLAC)-led project to create the world’s largest digital camera, which will enable a new telescope being built on a Chilean mountaintop to investigate key astronomical questions ranging from dark matter and dark energy to near-Earth asteroids.

Once constructed, the Large Synoptic Survey Telescope's 3.2-billion-pixel camera will be the largest digital camera in the world. Roughly the size of a small car, the camera will take 800 panoramic images each night, surveying the entire southern sky twice a week. These images will enable researchers to create a 3-D map of the universe with unprecedented depth and detail, and could shed light on the fundamental properties of dark energy and dark matter.

After two and a half days of presentations and meetings at SLAC, the panel of 19 experts recommended that the 3.2 gigapixel (billion pixel) camera for the Large Synoptic Survey Telescope receive Critical Decision-1 status, the DOE’s project-management milestone that defines a large project’s approach and funding for achieving a specific scientific mission.

The camera, which will be built at SLAC, is expected to cost about one third of the nearly $500 million price tag for the new telescope, which is being borne by the DOE and the National Science Foundation, as well as several public and private organizations in the United States and abroad.

"The LSST Camera Project team is experienced and has demonstrated a good working relationship," said Kurt Fisher, DOE/SC Review Chairperson from DOE's Office of Project Assessment. "The initial, plenary presentations were impressive, and the team was well-prepared for the review."

Actual CD-1 status will not be officially conferred until higher DOE management reviews the panel’s report, but the experts’ positive comments had LSST managers very optimistic. “Congratulations! You’ve reached a very important milestone on the road to becoming real,” said Fred Borcherding, DOE’s LSST program manager, after the review panel had presented their findings and recommendations at the review’s final session.

“This was an incredibly professional review,” said Steven Kahn, lead scientist on SLAC’s camera project and deputy director of the overall LSST effort. “We learned a lot and will take the panel’s recommendations very seriously as we move forward.”  More than 100 people (about 30 full-time equivalents) from four DOE laboratories, nine universities and one foreign organization are working on the camera project.

The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the solar system, exploring the transient optical sky and mapping the Milky Way.

Sporting an 8.4-m diameter primary mirror, the LSST will be a large, wide-field ground-based telescope designed to provide time-lapse 3-D maps of the universe with unprecedented depth and detail. Of particular interest for cosmology and fundamental physics, these maps can be used to locate the mysterious dark matter, which many scientists think constitutes more than 80% of all matter in the universe, and to characterize the properties of the even more mysterious dark energy, which is driving the accelerating expansion of the universe.

The LSST will also create a detailed map of the Milky Way and a comprehensive census of our solar system and open a movie-like window on objects that change or move rapidly: exploding supernovae, potentially hazardous near-Earth asteroids and distant Kuiper Belt Objects. A new telescope is needed because no existing space- or ground-based instrument has, or can be economically modified to provide, the capabilities that LSST’s science mission requires.

“The speed at which a telescope can survey the sky is proportional to both the size of its mirror and the field of view of its camera,” said Nadine Kurita, SLAC Camera Project manager. “While a telescope in space can take pictures at a finer level of detail because it's not looking through a turbulent atmosphere, it would necessarily be much smaller than LSST, and thus could not possibly provide such an extensive survey.”

Each night, the LSST will take more than 800 wide-field 15-second exposures, each covering 49 times more sky area than the moon. It will photograph the entire visible sky twice a week. Although it will weigh 650 tons (including 60 tons of optical components), the LSST will be nimble enough to move between its image-aiming points in just five seconds.
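
Those cadence numbers hang together arithmetically. Assuming a lunar angular area of about 0.2 square degrees and roughly 20,000 square degrees of sky accessible from one site (both my assumptions; the article gives only the 49x ratio):

```python
# Back-of-envelope check of the survey cadence.
import math

moon_area = math.pi * 0.25**2   # deg^2, moon diameter ~0.5 deg (assumption)
field = 49 * moon_area          # each exposure: ~9.6 deg^2
per_night = 800 * field         # ~7,700 deg^2 imaged per night
visible_sky = 20000             # deg^2 from one site (rough assumption)

print(f"field of view: {field:.1f} deg^2")
print(f"nights to cover the sky once: {visible_sky / per_night:.1f}")
# -> about 2.6 nights, consistent with "twice a week"
```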

Since 2001, the LSST has been ranked highly by a dozen national advisory committees, most recently last year’s National Academy of Sciences/National Research Council’s “New Worlds, New Horizons” decadal review, which said it was the highest-priority large ground-based telescope for the coming decade.

In 2003, the non-profit LSST Corporation was set up in Tucson, Ariz., to raise private and agency funding and to manage the collaboration, which now includes 35 institutions, universities and national labs from around the world.

DOE approved the camera’s mission need (CD-0) in June. Future milestones include CD-2 (baseline design approved), CD-3 (construction start), and CD-4 (construction finished; full operation begins).

While SLAC is the lead organization for the LSST camera, the National Optical Astronomy Observatory will provide the telescope and site team, the National Center for Supercomputing Applications will construct and test the archive and data access center, and the Association of Universities for Research in Astronomy is responsible for overseeing the LSST construction. Work on the telescope and its site atop Cerro Pachón in northern Chile is already under way.

At the heart of the 3.2-gigapixel LSST camera—and its most critical components—are its 189 sensors, light-sensitive semiconductor chips far more sophisticated than those used in commercial digital cameras.

“We’re going to be looking at light that’s 100 million times fainter than the human eye can see,” said Paul O’Connor, the Brookhaven National Laboratory scientist  in charge of the camera’s sensor subsystem. “The LSST telescope has enough resolving power to distinguish the images of two stars separated by the equivalent of a pair of car headlights seen at a distance of 400 miles. We designed our charge-coupled device (CCD) chips to record these images with unparalleled clarity while using the minimum silicon area, cost and power.”

The LSST sensors are designed to respond to a range of light—ultraviolet, visible and infrared—that is much broader than that of commercial CCDs, and must also be extremely flat so the entire image will be in perfect focus. Moreover, to move all the image data off the chip in just two seconds—rather than the several minutes typical of astronomical images—every sensor is divided into 16 data sectors, each with its own output channel. Each night the LSST will produce more than 15 TB of raw astronomical data. Over its 10-year operating lifetime, the LSST will produce the world’s largest public data set: a 22-petabyte database catalog and a 100-PB image archive.
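
The two-second readout figure implies a modest per-channel rate once all that parallelism is counted, which is the point of dividing each sensor into 16 sectors:

```python
# Readout arithmetic from the figures in the text: 3.2 gigapixels split
# across 189 CCDs x 16 output channels, read out in 2 seconds.
total_pixels = 3.2e9
channels = 189 * 16  # 3,024 parallel output channels
pixels_per_channel = total_pixels / channels

print(f"{pixels_per_channel / 1e6:.2f} Mpix per channel")      # ~1.06 Mpix
print(f"{pixels_per_channel / 2 / 1e6:.2f} Mpix/s per channel")  # ~0.53 Mpix/s
```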

LSST has been designed as a public facility from the beginning, with deep color imaging and multi-dimensional data products made available quickly over the Internet. Supercomputers will continuously transform LSST imaging data into a revolutionary four-dimensional space-time landscape of color and motion, offering exciting possibilities for exploration and discovery by curious minds of all ages. Anyone with a computer will be able to fly through the universe, zooming past objects a hundred million times fainter than can be observed with the unaided eye.

LSST’s results may well be much more significant than just learning about the cosmos, O’Connor said: “I think finding the source of dark energy may have long term impacts with the potential to transform society—just like the fundamental discoveries in electromagnetism and atomic physics have done in producing the technologies we take for granted today.”

By Mike Ross
From rdmag

No Need to Shrink Guts to Have a Larger Brain

Brain tissue is a major consumer of energy in the body. If an animal species evolves a larger brain than its ancestors, the increased need for energy can be met by either obtaining additional sources of food or by a trade-off with other functions in the body. In humans, the brain is three times larger and thus requires a lot more energy than that of our closest relatives, the great apes. Until now, the generally accepted theory for this condition was that early humans were able to redirect energy to their brains thanks to a reduced digestive tract. Zurich primatologists, however, have now disproved this theory, demonstrating that mammals with relatively large brains actually tend to have a somewhat bigger digestive tract. Ana Navarrete, the first author on the study recently published in Nature, has studied hundreds of carcasses from zoos and museums.

 Thanks to communal care for mothers and children, humans can afford both: a huge brain and more frequent offspring.

"The data set contains a hundred species, from the stag to the shrew," explains the PhD student. The scientists involved in the study then compared the size of the brain with the fat-free body mass. Senior author Karin Isler stresses that, "it is extremely important to take an animal's adipose deposits into consideration as, in some species, these constitute up to half of the body mass in autumn." But even compared with fat-free body mass, the size of the brain does not correlate negatively with the mass of other organs.

More fat, smaller brain
Nevertheless, the storage of fat plays a key role in brain size evolution. The researchers discovered another rather surprising correlation: the more fat an animal species can store, the smaller its brain. Although adipose tissue itself does not use much energy, fat animals need a lot of energy to carry extra weight, especially when climbing or running. This energy is then lacking for potential brain expansion. "It seems that large adipose deposits often come at the expense of mental flexibility," says Karin Isler. "We humans are an exception, along with whales and seals -- probably because, like swimming, our bipedalism doesn't require much more energy even when we are a bit heavier."

Interplay of energetic factors
The rapid increase in brain size and the associated increase in energy intake began about two million years ago in the genus Homo. Based on their extensive studies of animals, the Zurich researchers propose a scenario in which several energetic factors are involved: "In order to stabilize the brain's energy supply on a higher level, prehistoric man needed an all-year, high-quality source of food, such as underground tubers or meat. As they no longer climbed every day, they perfected the art of walking upright. Even more important, however, is communal child care," says Karin Isler. Because ape mothers do not receive any help, they can only raise an offspring every five to eight years. Thanks to communal care for mothers and children, humans can afford both: a huge brain and more frequent offspring.

From sciencedaily

Nano Car Has Molecular 4-Wheel Drive: Smallest Electric Car in the World

To carry out mechanical work, one usually turns to engines, which transform chemical, thermal or electrical energy into kinetic energy in order to, say, transport goods from A to B. Nature does the same thing; in cells, so-called motor proteins -- such as kinesin and the muscle protein actin -- carry out this task. Usually they glide along other proteins, similar to a train on rails, and in the process "burn" ATP (adenosine triphosphate), the chemical fuel, so to speak, of the living world.

 Measuring approximately 4x2 nanometres, the molecular car is forging ahead on a copper surface on four electrically driven wheels.

A number of chemists aim to use similar principles and concepts to design molecular transport machines, which could then carry out specific tasks on the nano scale. According to an article in the latest edition of science magazine "Nature," scientists at the University of Groningen and at Empa have successfully taken "a decisive step on the road to artificial nano-scale transport systems." They have synthesised a molecule from four rotating motor units, i.e. wheels, which can travel straight ahead in a controlled manner. "To do this, our car needs neither rails nor petrol; it runs on electricity. It must be the smallest electric car in the world -- and it even comes with 4-wheel drive" comments Empa researcher Karl-Heinz Ernst.

Range per tank of fuel: still room for improvement
The downside: the small car, which measures approximately 4x2 nanometres -- about one billion times smaller than a VW Golf -- needs to be refuelled with electricity after every half revolution of the wheels -- via the tip of a scanning tunnelling microscope (STM). Furthermore, due to their molecular design, the wheels can only turn in one direction. "In other words: there's no reverse gear," says Ernst, who is also a professor at the University of Zurich, laconically.

According to its "construction plan" the drive of the complex organic molecule functions as follows: after sublimating it onto a copper surface and positioning an STM tip over it leaving a reasonable gap, Ernst's colleague, Manfred Parschau, applied a voltage of at least 500 mV. Now electrons should "tunnel" through the molecule, thereby triggering reversible structural changes in each of the four motor units. It begins with a cis-trans isomerisation taking place at a double bond, a kind of rearrangement -- in an extremely unfavourable position in spatial terms, though, in which large side groups fight for space. As a result, the two side groups tilt to get past each other and end up back in their energetically more favourable original position -- the wheel has completed a half turn. If all four wheels turn at the same time, the car should travel forwards. At least, according to theory based on the molecular structure.

To drive or not to drive -- a simple question of orientation
And this is what Ernst and Parschau observed: after ten STM stimulations, the molecule had moved six nanometres forwards -- in a more or less straight line. "The deviations from the predicted trajectory result from the fact that it is not at all a trivial matter to stimulate all four motor units at the same time," explains "test driver" Ernst.

Another experiment showed that the molecule really does behave as predicted. A part of the molecule can rotate freely around the central axis, a C-C single bond -- the chassis of the car, so to speak. It can therefore "land" on the copper surface in two different orientations: in the right one, in which all four wheels turn in the same direction, and in the wrong one, in which the rear axle wheels turn forwards but the front ones turn backwards -- upon excitation the car remains at a standstill. Ernst and Parschau were able to observe this, too, with the STM.

Therefore, the researchers have achieved their first objective, a "proof of concept," i.e. they have been able to demonstrate that individual molecules can absorb external electrical energy and transform it into targeted motion. The next step envisioned by Ernst and his colleagues is to develop molecules that can be driven by light, perhaps in the form of UV lasers.

From sciencedaily

Researchers Create a Pituitary Gland from Scratch

Last spring, a research team at Japan's RIKEN Center for Developmental Biology created retina-like structures from cultured mouse embryonic stem cells. This week, the same group reports that it's achieved an even more complicated feat—synthesizing a stem-cell-derived pituitary gland.

 New gland: After 13 days in culture, mouse embryonic stem cells had self-assembled the precursor pouch, shown here, that gives rise to the pituitary gland.

The pituitary gland is a small organ at the base of the brain that produces many important hormones and is a key part of the body's endocrine system. It's especially crucial during early development, so the ability to simulate its formation in the lab could help researchers better understand how these developmental processes work. Disruptions in the pituitary have also been associated with growth disorders, such as gigantism, and vision problems, including blindness. 

The study, published in this week's Nature, moves the medical field even closer to being able to bioengineer complex organs for transplant in humans.

The experiment wouldn't have been possible without a three-dimensional cell culture. The pituitary gland is an independent organ, but it can't develop without chemical signals from the hypothalamus, the brain region that sits just above it. With a three-dimensional culture, the researchers could grow both types of tissue together, allowing the stem cells to self-assemble into a mouse pituitary. "Using this method, we could mimic the early mouse development more smoothly, since the embryo develops in 3-D in vivo," says Yoshiki Sasai, the lead author of the study.

The researchers had a vague sense of the signaling factors needed to form a pituitary gland, but they had to figure out the exact components and sequence through trial and error. The winning combination consisted of two main steps, which required the addition of two growth factors and a drug to stimulate a developmental protein called sonic hedgehog (named after the video game character). After about two weeks, the researchers had a structure that resembled a pituitary gland.

Fluorescence staining showed that the cultured pituitary tissue expressed the appropriate biomarkers and secreted the right hormones. The researchers went a step further and tested the functionality of their synthesized organs by transplanting them into mice with pituitary deficits. The transplants were a success, restoring levels of glucocorticoid hormones in the blood and reversing behavioral symptoms, such as lethargy. Mice implanted with stem-cell constructs that hadn't been treated with the right signaling factors, and therefore weren't functional pituitary glands, did not improve.

Next, Sasai and his colleagues will attempt the experiment with human stem cells. Sasai suspects it will take them another three years to synthesize human pituitary tissue. Perfecting the transplantation methods in animals will likely take another few years. 

Still, researchers in the stem-cell field are impressed with what Sasai's team has accomplished. "This is just an initial step toward generating viable, transplantable human organs, but it's both an elegant and illuminating study," says Michael G. Rosenfeld, a neural stem-cell expert at the University of California, San Diego. 

By Erica Westly
From Technology Review

A Super-Absorbent Solar Material

A new nanostructured material that absorbs a broad spectrum of light from any angle could lead to the most efficient thin-film solar cells ever.

Light catcher: This scanning electron microscope image shows the super absorbent nanostructures, which measure 400 nanometers at their base.

Researchers are applying the design to semiconductor materials to make solar cells that they hope will save money on materials costs while still offering high power-conversion efficiency. Initial tests with silicon suggest that this kind of patterning can lead to a fivefold enhancement in absorbance. 

Conventional solar cells are typically a hundred micrometers or more thick. Researchers are working on ways to make thinner solar cells, on the order of hundreds of nanometers thick rather than micrometers, with the same performance, to lower manufacturing costs. However, a thinner solar cell normally absorbs less light, meaning it cannot generate as much electricity.
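
The trade-off can be made concrete with a single-pass Beer-Lambert estimate. The Python sketch below is a toy calculation, not the researchers' model, and the absorption coefficient in it is an assumed, illustrative value:

    # Toy Beer-Lambert estimate of how film thickness affects absorption.
    # The absorption coefficient is an assumed, illustrative value, not a
    # figure from the study.
    import math

    ALPHA = 1.0e4          # assumed absorption coefficient, in 1/cm
    CM_PER_NM = 1.0e-7     # unit conversion: 1 nm = 1e-7 cm

    def absorbed_fraction(thickness_nm):
        # Fraction of light absorbed in one pass: 1 - exp(-alpha * d)
        return 1.0 - math.exp(-ALPHA * thickness_nm * CM_PER_NM)

    # ~100-micrometer conventional cell, a 220-nm thin film, and 25x that film
    for t_nm in (100000, 220, 5500):
        print(f"{t_nm:>6} nm -> {absorbed_fraction(t_nm):.1%} absorbed")

At this assumed coefficient, the 100-micrometer cell absorbs essentially all of the light in a single pass, while the 220-nanometer film captures only about a fifth of it; that shortfall is what the nanoscale patterning described below is meant to recover.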

Some researchers are turning to exotic optical effects that emerge at the nanoscale to solve this conundrum. Harry Atwater, a professor of applied physics and materials science at Caltech and a pioneer of the field, has now come up with a way of patterning materials at the nanoscale that turns them into solar super-absorbers.

Atwater worked with Koray Aydin, now an assistant professor of electrical engineering and computer science at Northwestern University, to develop the super-absorber design, which takes advantage of a phenomenon called optical resonance. Just as a radio antenna will resonate with and absorb certain radio waves, nanostructured optical antennas can resonate with and absorb visible and infrared light. The length of a structure determines what wavelength of light it will resonate with. So Atwater and Aydin designed structures that effectively have many different lengths: wedge shapes with pointy tips and wide bases. The thin, nanoscale wedges strongly absorb blue light at the tip and red light at the base.
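
A textbook rule of thumb makes the size-to-wavelength link concrete: a half-wave antenna is about half a wavelength long, shortened further by the effective index of the guided mode. The Python sketch below is a cartoon with an assumed effective index; real plasmonic wedges are dispersive and require full electromagnetic simulation:

    # Toy half-wave-antenna estimate: resonant length ~ wavelength / (2 * n_eff).
    # The effective index N_EFF is an assumed, illustrative value.
    N_EFF = 3.0  # assumed mode-compression factor for a metal nanostructure

    def resonant_length_nm(wavelength_nm):
        # Length of a structure resonating at the given free-space wavelength
        return wavelength_nm / (2.0 * N_EFF)

    for color, lam_nm in (("blue", 450), ("green", 550), ("red", 700)):
        print(f"{color:>5} ({lam_nm} nm) -> resonant section ~{resonant_length_nm(lam_nm):.0f} nm")

Under this assumed index, the resonant sections for blue through red light fall between roughly 75 and 120 nanometers, all of which occur somewhere along a wedge that tapers from 40 to 400 nanometers; that is why a single wedge can cover the whole visible band.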

Atwater and Aydin demonstrated this broadband effect in a 260-nanometer-thick film made of a layer of silver topped with a thin layer of silicon dioxide and finished with another thin layer of silver carved with arrays of wedges that are 40 nanometers at their tips. Atwater says they chose these materials because they are particularly challenging: in their unpatterned state, they're both highly reflective; but the patterned films can absorb an average of 70 percent of the light across the entire visible spectrum. This work is described in the online journal Nature Communications.

Kylie Catchpole, a research fellow at the Australian National University in Canberra, says the design is promising because it works over a broad band of the spectrum. These effects, Catchpole says, "are usually very sensitive to wavelength." However, she notes, the designs will have to be applied to other materials to work in solar cells.

Aydin and Atwater are now doing just that. The researchers have made a 220-nanometer-thick silicon film that absorbs the same amount of light as an unpatterned film 25 times thicker.

By Katherine Bourzac
From Technology Review

Light-Based Therapy Destroys Cancer Cells

For more than two decades, researchers have tried to develop a light-activated cancer therapy that could replace standard chemotherapy, which is effective but causes serious negative side effects. Despite those efforts, they've struggled to come up with a light-activated approach that would target only cancer cells.

Now scientists at the National Cancer Institute have developed a possible solution that involves pairing cancer-specific antibodies with a heat-sensitive fluorescent dye. The dye is nontoxic on its own, but when it comes into contact with near-infrared light, it heats up and essentially burns a small hole in the cell membrane it has attached to, killing the cell. 

Light touch: Researchers treated the tumor on the right-hand side of this mouse’s body with a light-activated therapy. The top image is before treatment; the bottom is after. 

To target the tumor cells, the researchers used antibodies that bind to proteins that are overexpressed in cancer cells. "Normal cells may have a hundred copies of these proteins, but cancer cells have millions of copies. That's a big difference," says Hisataka Kobayashi, a molecular imaging researcher at the National Cancer Institute and the lead author of the new study, published this week in Nature Medicine. The result is that only cancer cells are vulnerable to the light-activated cascade.
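
Taken at face value, those copy numbers imply a large selectivity margin. A toy estimate, reading "a hundred" as 100 and "millions" as one million:

    # Toy selectivity estimate from the copy numbers quoted above.
    NORMAL_COPIES = 100          # "a hundred copies" on a normal cell
    CANCER_COPIES = 1_000_000    # "millions" read as 1e6 for illustration

    ratio = CANCER_COPIES // NORMAL_COPIES
    print(f"Cancer cells bind roughly {ratio:,}x more dye than normal cells")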

The researchers tested the new treatment in mice and found that it reduced tumor growth and prolonged survival. 

There are a few kinks to work out before the system can be adapted for humans, though. For instance, the researchers couldn't test the treatment's effect on large tumors, since killing off too many cells at once caused cardiovascular problems in the mice. Finding the right cancer-cell markers to pair with the dye may also prove difficult. For example, HER-2, one of the proteins targeted in the study, is only expressed in 40 percent of breast-cancer cells in humans.

Still, the lack of toxicity associated with the treatment is a huge advantage, says Karen Brewer, a chemist at Virginia Tech who also works on light-activated cancer therapies. "What's interesting about this study is that they're applying a traditional method of targeting cancer cells to a light-activated treatment," she says. "This is really where the field is headed."

The dye used in the study offers another bonus because it lights up—allowing clinicians to track the treatment's progress with fluorescence imaging. In the mice, the fluorescence visibly declined in tumor cells a day after administration of the near-infrared light. Kobayashi suspects the approach could also prove valuable as a secondary therapy by helping surgeons label cancer cells that may remain after a tumor has been excised. "It could help clean up the tumor cells that are harder for surgeons to get to," he says.

By Erica Westly 
From Technology Review

New Method for Making Neurons Could Lead to Parkinson's Treatment

A new method of synthesizing dopamine-producing neurons, the predominant type of brain cell destroyed in Parkinson's, offers hope for creating cell-replacement therapies that reverse the damage.

The method provides an efficient way of making functional cells. When transplanted into mice and rats with brain damage and movement problems similar to Parkinson's, the cells integrated into the brain and worked normally, reversing the animals' motor issues.

Revamping the brain: Human dopamine-producing cells (marked in red and green) survive and function when transplanted into the brain of rats with brain damage that resembles Parkinson’s disease.

The finding brings researchers a step closer to testing a stem-cell-derived therapy in patients with this disorder. "We finally have a cell that seems to survive and function and a cell source that we can easily scale up," says Lorenz Studer, a researcher at the Sloan Kettering Institute and senior author on the new study. "That makes us optimistic that this could potentially be used in patients in the future."

The research also highlights the challenges of generating cells for tissue-replacement therapy, showing that subtle differences in the way the cells are made can have a huge impact on how well they work once implanted.

Many of the symptoms of Parkinson's disease—which include tremor, muscle rigidity, and loss of balance—are linked to loss of dopamine in the brain. While medications exist to replace some of the lost chemical, they do not alleviate all of the symptoms and can lose their effectiveness over time. Scientists hope that replacing lost cells with new ones will provide a more complete and long-term solution.

In the new study, researchers started with human embryonic stem cells, which by definition can differentiate into any cell type. To make a specific type of cell in high numbers, scientists expose the stem cells to a cocktail of chemicals that mimic what they would experience during normal development.

While stem-cell researchers had previously been able to create dopamine-producing neurons from human stem cells, these cells did little to alleviate movement problems in animals engineered to mimic the symptoms of Parkinson's. In 2009, Studer and others developed a method of making the cells that more closely mimics the way they form during development. The resulting cells also carry more of the molecular markers that characterize dopamine-producing cells in the brain.

In the new research, published Sunday in the journal Nature, Studer's team found a way to make these cells even more efficiently. This is significant in terms of ultimately testing the therapy in humans; many methods for making specific types of cells are complex and yield small amounts of the desired product.

They could scale up the process to make enough material to transplant into monkeys, whose larger brains are more akin to humans' than those of other animals used in testing.

In addition, the researchers demonstrated that transplants of the cells could correct Parkinson's-like problems in mice and rats. Three different tests of motor function "all very dramatically improved when you put the cells in," says Studer.

While the two monkeys in the study also had brain damage reminiscent of Parkinson's, not enough time has passed to determine whether the transplants will help, Studer says. It took five months post-transplant for the cells to have a visible effect in rodents. 

The findings demonstrate the challenges of developing treatments based on living cells. "Previously, I think, many people thought of cell therapy [for Parkinson's] as a dopamine-producing biological pump," says Ole Isacson, a neuroscientist at Harvard Medical School. But in reality, it requires a very specific replacement of nerve cells. "Unless you have a specific differentiation protocol, you won't get functional recovery in rodent models." Isacson was not involved in the research but has collaborated with Studer on other projects.

Researchers mostly used embryonic stem cells in these experiments, because tissue derived from these cells is already being used in human trials for treating spinal cord injury and certain types of blindness. They also showed that the protocol works on induced pluripotent stem (iPS) cells, which are derived from adult cells that are turned back to an embryonic-like state using a combination of genetic or chemical factors. iPS cells are genetically matched to the cell donor, and might ultimately provide a preferable source of tissue for therapy. However, these cells are further from human testing because they are much less studied than embryonic cells.

Studer's team now plans to make the cells on an even larger scale in a facility that meets conditions set by the U.S. Food and Drug Administration for human therapies. "We need to be able to make enough cells to graft 100 patients," says Studer. He predicts that will take a year or two, followed by extensive safety testing to make sure the differentiated cells do not behave in unexpected ways once implanted.

By Emily Singer
From Technology Review

Company Decodes Cancers to Target Treatment

Just 18 months after its launch, cancer diagnostic startup Foundation Medicine has already developed a clinical diagnostic test, forged partnerships with several pharmaceutical companies, and discovered a number of novel mutations that may point toward new drug treatments for cancer.

Cancer reader: Foundation Medicine has developed a diagnostic test for cancer that reads the genome sequence of hundreds of cancer-linked genes. The results help oncologists pick the best drugs for that patient.

The company is at the forefront of a growing trend in cancer: choosing drugs based on the genetic profile of a patient's tumor cells. The plunging cost of gene sequencing means scientists can read the entire genome of an individual's cancer, leading to the rapid discovery of more and more cancer-linked mutations. Foundation Medicine is putting those findings—and cheap sequencing technology—to work to detect these mutations in newly diagnosed cancers. 

The startup was formed last year by a handful of cancer and genomics experts in Boston, including genomics pioneer Eric Lander, with funding from Boston-based venture capital group Third Rock Ventures. They have since raised $33.5 million from several investors, including Google Ventures. 

While most cancer diagnostics focus on individual genes or specific mutations, Foundation Medicine developed a diagnostic test to read the entire sequence of hundreds of cancer-linked genes. The company has yet to finalize the price of the test, but says it will be similar to the cost of testing five or six individual molecular markers. 

Foundation Medicine has so far processed several thousand tumor samples provided by academic medical centers, pharmaceutical companies, and clinical oncologists. The analysis detects whether the individual has mutations tied to existing drugs—both drugs that are approved for the patient's specific cancer, and those that are approved for other conditions. The test, which takes about two weeks, will also highlight whether a patient has mutations that make him a candidate for experimental drugs in clinical trials. While the test is currently available to some oncologists, the company doesn't plan an international launch until later next year.
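
Conceptually, the reporting step is a lookup from each detected variant to known therapies. The Python sketch below is purely hypothetical: the gene-drug pairings are invented placeholders and do not describe Foundation Medicine's actual test, pipeline, or knowledge base:

    # Hypothetical sketch of matching detected tumor variants to therapies.
    # All gene/drug pairings here are invented placeholders, not clinical
    # guidance and not Foundation Medicine's actual knowledge base.
    DRUG_MATCHES = {
        ("EGFR", "L858R"): ["drug approved for this cancer", "trial candidate"],
        ("ALK", "fusion"): ["drug approved for another indication"],
    }

    def match_therapies(detected_variants):
        # Look up each (gene, variant) pair; unknown variants get no match.
        return {v: DRUG_MATCHES.get(v, ["no known match"]) for v in detected_variants}

    tumor_profile = [("EGFR", "L858R"), ("KRAS", "G12D")]
    for variant, options in match_therapies(tumor_profile).items():
        print(variant, "->", options)
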
The number of genes analyzed in the test will grow as the number of cancer-linked genes expands. The company will release an updated version of the test once or twice a year, says Michael Pellini, the company's chief executive officer. "That's why our work with pharma and academic medical centers is so important, because we get great insight into new therapeutics coming down the pike," he says. "If a new therapy targeting a specific molecular profile is getting ready for human testing, we want to make sure we are adding that gene to our test."

A number of pharmaceutical companies are using the test in clinical trials of new drugs. For example, if a study of a specific new drug failed to show a benefit in the patient population overall but did appear to work in a subset of patients, researchers can use Foundation Medicine's test to determine if there is a particular genetic alteration that predicts who is most likely to respond. 

Companies are also using the technology to direct patients into specific studies of drugs designed to target different mutations; it can often be difficult to enroll enough patients in such studies. Furthermore, if researchers collect multiple tumor samples from the same patient over time, they can use the test to understand how the tumor evolves and try to predict why one person's tumor might recur more quickly than another's.

Pellini says at least two pharmaceutical companies are considering using the technology in all cancer clinical trials going forward. "Pharma's willingness to accept this type of molecular approach has been my single greatest surprise since joining Foundation Medicine," he says. Historically, the pharmaceutical industry has been reluctant to test drugs in only a subset of patients, because this limits the number of people who might buy the drug. 

"There has been a transformation among many pharmaceutical companies to where they understand that targeted therapeutics is the new paradigm," says Pellini. Targeting clinical trials to only the patients who are most likely to respond to a drug makes it faster and cheaper to show that a drug works. "As everyone works to turn cancer into a chronic disease, as an industry, we will have the ability to treat patients for years rather than months—pharma has caught on to those concepts," he says.

Because Foundation Medicine's test is based on sequencing genes, rather than detecting known mutations, it can also find novel genetic changes. "As a by-product, a lot of novel discovery is coming out of these efforts," says Pellini. "We are identifying novel gene fusions, translocations, and mutations, many of which have clinical significance."

For example, researchers at Foundation Medicine identified a genetic translocation (a rearrangement in which a segment of DNA is flipped into a new position) in cancer tissue from a patient with non-small-cell lung cancer. Subsequent studies found that this mutation, which lies in a part of the genome that is being targeted by pharmaceutical companies, is present in about 5 percent of non-small-cell lung cancers. Pellini says the company is still working on how to deal with such new discoveries. "We are not a therapeutic company, and our primary interest tends to be on the diagnostic side," he says. "But we recognize that some findings may have strong therapeutic implications."

By Emily Singer
From Technology Review