Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........


Quantum Physics Breakthrough: Scientists Find an Equation for Materials Innovation

By reworking a theory first proposed by physicists in the 1920s, the researchers discovered a new way to predict important characteristics of a new material before it's been created. The new formula allows computers to model the properties of a material up to 100,000 times faster than previously possible and vastly expands the range of properties scientists can study.

Professor Emily Carter and graduate student Chen Huang developed a new way of predicting important properties of substances. The advance could speed the development of new materials and technologies.

"The equation scientists were using before was inefficient and consumed huge amounts of computing power, so we were limited to modeling only a few hundred atoms of a perfect material," said Emily Carter, the engineering professor who led the project.

"But most materials aren't perfect," said Carter, the Arthur W. Marks '19 Professor of Mechanical and Aerospace Engineering and Applied and Computational Mathematics. "Important properties are actually determined by the flaws, but to understand those you need to look at thousands or tens of thousands of atoms so the defects are included. Using this new equation, we've been able to model up to a million atoms, so we get closer to the real properties of a substance."

By offering a panoramic view of how substances behave in the real world, the theory gives scientists a tool for developing materials that can be used for designing new technologies. Car frames made from lighter, stronger metal alloys, for instance, might make vehicles more energy efficient, and smaller, faster electronic devices might be produced using nanowires with diameters tens of thousands of times smaller than that of a human hair.

Paul Madden, a chemistry professor and provost of The Queen's College at Oxford University, who originally introduced Carter to this field of research, described the work as a "significant breakthrough" that could allow researchers to substantially expand the range of materials that can be studied in this manner. "This opens up a new class of material physics problems to realistic simulation," he said.

The new theory traces its lineage to the Thomas-Fermi equation, a concept proposed by Llewellyn Hilleth Thomas and Nobel laureate Enrico Fermi in 1927. The equation was a simple means of relating two fundamental characteristics of atoms and molecules. They theorized that the energy electrons possess as a result of their motion -- electron kinetic energy -- could be calculated based on how the electrons are distributed in the material. Electrons that are confined to a small region have higher kinetic energy, for instance, while those spread over a large volume have lower energy.
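
As an illustration of the kind of relationship Thomas and Fermi proposed, their original functional (the classic textbook form, not the corrected model Carter and Huang later derived) writes the electron kinetic energy purely in terms of the electron density n(r):

$$ T_{\mathrm{TF}}[n] \;=\; C_{\mathrm{TF}} \int n(\mathbf{r})^{5/3}\, d^3r, \qquad C_{\mathrm{TF}} = \tfrac{3}{10}\left(3\pi^2\right)^{2/3} \ \text{(atomic units)} $$

Because the density enters with a power greater than one, regions where electrons are packed tightly contribute disproportionately more kinetic energy, which is the intuition described above.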

Understanding this relationship is important because the distribution of electrons is easier to measure, while the energy of electrons is more useful in designing materials. Knowing the electron kinetic energy helps researchers determine the structure and other properties of a material, such as how it changes shape in response to physical stress. The catch was that Thomas and Fermi's concept was based on a theoretical gas, in which the electrons are spread evenly throughout. It could not be used to predict properties of real materials, in which electron density is less uniform.

The next major advance came in 1964, when a second pair of scientists, Pierre Hohenberg and Walter Kohn (also a Nobel laureate), proved that the concepts proposed by Thomas and Fermi could be applied to real materials. While they didn't derive a final, working equation for directly relating electron kinetic energy to the distribution of electrons, Hohenberg and Kohn laid the formal groundwork that proved such an equation exists. Scientists have been searching for a working theory ever since.

Carter began working on the problem in 1996 and produced a significant advance with two postdoctoral researchers in 1999, building on Hohenberg and Kohn's work. She has continued to whittle away at the problem since. "It would be wonderful if a perfect equation that explains all of this would just fall from the sky," she said. "But that isn't going to happen, so we've kept searching for a practical solution that helps us study materials."

In the absence of a solution, researchers have been calculating the energy of each atom from scratch to determine the properties of a substance. The laborious method bogs down the most powerful computers if more than a few hundred atoms are being considered, severely limiting the amount of a material and type of phenomena that can be studied.

Carter knew that using the concepts introduced by Thomas and Fermi would be far more efficient, because it would avoid having to process information on the state of each and every electron.

As they worked on the problem, Carter and Chen Huang, a doctoral student in physics, concluded that the key to the puzzle was addressing a disparity observed in Carter's earlier work. Carter and her group had developed an accurate working model for predicting the kinetic energy of electrons in simple metals. But when they tried to apply the same model to semiconductors -- the conductive materials used in modern electronic devices -- their predictions were no longer accurate.

"We needed to find out what we were missing that made the results so different between the semiconductors and metals," Huang said. "Then we realized that metals and semiconductors respond differently to electrical fields. Our model was missing this."

In the end, Huang said, the solution was a compromise. "By finding an equation that worked for these two types of materials, we found a model that works for a wide range of materials."

Their new model, published online Jan. 26 in Physical Review B, a journal of the American Physical Society, provides a practical method for predicting the kinetic energy of electrons in semiconductors from only the electron density. The research was funded by the National Science Foundation.

Coupled with advances published last year by Carter and Linda Hung, a graduate student in applied and computational mathematics, the new model extends the range of elements and quantities of material that can be accurately simulated.

The researchers hope that by moving beyond the concepts introduced by Thomas and Fermi more than 80 years ago, their work will speed future innovations. "Before people could only look at small bits of materials and perfect crystals," Carter said. "Now we can accurately apply quantum mechanics at scales of matter never possible before."

From sciencedaily.com

Scanning for Skin Cancer: Infrared System Looks for Deadly Melanoma

The prototype system works by looking for the tiny temperature difference between healthy tissue and a growing tumor.

The researchers have begun a pilot study of 50 patients at Johns Hopkins to help determine how specific and sensitive the device is in evaluating melanomas and precancerous lesions. Further patient testing and refinement of the technology are needed, but if the system works as envisioned, it could help physicians address a serious health problem: The National Cancer Institute estimated that 68,720 new cases of melanoma were reported in the United States in 2009; it attributed 8,650 deaths to the disease.

Before scanning, the targeted skin is cooled with a brief burst of compressed air.


To avert such deaths, doctors need to identify a mole that may be melanoma at an early, treatable stage. To do this, doctors now look for subjective clues such as the size, shape and coloring of a mole, but the process is not perfect.

"The problem with diagnosing melanoma in the year 2010 is that we don't have any objective way to diagnose this disease," said Rhoda Alani, adjunct professor at the Johns Hopkins Kimmel Cancer Center and professor and chair of dermatology at the Boston University School of Medicine. "Our goal is to give an objective measurement as to whether a lesion may be malignant. It could take much of the guesswork out of screening patients for skin cancer."

With this goal in mind, Alani teamed with heat transfer expert Cila Herman, a professor of mechanical engineering in Johns Hopkins' Whiting School of Engineering. Three years ago, Herman obtained a $300,000 National Science Foundation grant to develop new ways to detect subsurface changes in temperature. Working with Muge Pirtini, a mechanical engineering doctoral student, Herman aimed her research at measuring heat differences just below the surface of the skin.

Because cancer cells divide more rapidly than normal cells, they typically generate more metabolic activity and release more energy as heat. To detect this, Herman uses a highly sensitive infrared camera on loan from the Johns Hopkins Applied Physics Laboratory. Normally, the temperature difference between cancerous and healthy skin cells is extremely small, so Herman and Pirtini devised a way to make the difference stand out. First, they cool a patient's skin with a harmless one-minute burst of compressed air. When the cooling is halted, they immediately record infrared images of the target skin area for two to three minutes. Cancer cells typically reheat more quickly than the surrounding healthy tissue, and this difference can be captured by the infrared camera and viewed through sophisticated image processing.
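
The analysis Herman and Pirtini describe -- comparing how quickly different patches of skin rewarm after the cooling burst ends -- can be illustrated with a short sketch. This is a conceptual example only; the frame rate, time window, and threshold below are assumptions for illustration, not parameters of the Johns Hopkins system.

```python
import numpy as np

def rewarming_rate(frames, fps=30.0, window_s=30.0):
    """Estimate a per-pixel rewarming rate (degrees C per second) from a stack
    of infrared frames recorded right after the cooling burst ends.

    frames: array of shape (num_frames, height, width), temperatures in C.
    """
    n = min(int(window_s * fps), len(frames))
    t = np.arange(n) / fps                        # time axis in seconds
    stack = frames[:n].reshape(n, -1)             # flatten the spatial dimensions
    # Least-squares slope of temperature versus time for every pixel.
    t_centered = t - t.mean()
    slopes = t_centered @ (stack - stack.mean(axis=0)) / (t_centered @ t_centered)
    return slopes.reshape(frames.shape[1:])

def flag_fast_rewarming(frames, relative_threshold=1.5):
    """Flag pixels that rewarm noticeably faster than the rest of the scene,
    mimicking the idea that a metabolically active lesion reheats more quickly."""
    rates = rewarming_rate(frames)
    return rates > relative_threshold * np.median(rates)
```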

"The system is actually very simple," Herman said. "An infrared image is similar to the images seen through night-vision goggles. In this medical application, the technology itself is noninvasive; the only inconvenience to the patient is the cooling."

The current pilot study is designed to determine how well the technology can detect melanoma. To test it, dermatologist-identified lesions undergo thermal scanning with the new system, and then a biopsy is performed to determine whether melanoma is actually present.

"Obviously, there is a lot of work to do," Herman said. "We need to fine-tune the instrument -- the scanning system and the software -- and develop diagnostic criteria for cancerous lesions. When the research and refinement are done, we hope to be able to show that our system can find melanoma at an early stage before it spreads and becomes dangerous to the patient."

Alani, the skin cancer expert, is also cautiously optimistic. "We, at this point, are not able to say that this instrument is able to replace the clinical judgment of a dermatologist, but we envision that this will be useful as a tool in helping to diagnose early-stage melanoma," Alani said. "We're very encouraged about the promise of this technology for improving our ability to prevent people from actually dying of melanoma."

The researchers envision a hand-held scanning system that dermatologists could use to evaluate suspicious moles. The technology also might be incorporated into a full-body-scanning system for patients with a large number of pigmented lesions, they said.

The skin cancer scanning system is protected under an international patent application submitted by the Johns Hopkins Technology Transfer office, with Herman, Alani and Pirtini listed as the inventors. No commercialization agreement has been reached, but the technology transfer staff has engaged in talks with investors and medical devices firms concerning possible licensing deals. Any business arrangements involving the inventors would be managed by The Johns Hopkins University in accordance with its conflict-of-interest policies.

From sciencedaily.com

Material Traps Light on the Cheap

A new photovoltaic material performs as well as the one found in today's best solar cells, but promises to be significantly cheaper. The material, created by researchers at Caltech, consists of a flexible array of light-absorbing silicon microwires and light-reflecting metal nanoparticles embedded in a polymer.



Light trap: Very little light can escape from this flexible array of silicon microwires embedded in a rubbery substrate.

Computational models suggest that the material could be used to make solar cells that would convert 15 to 20 percent of the energy in sunlight into electricity--on par with existing high-performance silicon cells. But the material would require just 1 percent of the materials used today, potentially leading to a dramatic decrease in costs. The researchers were led by Harry Atwater, professor of applied physics and materials science at Caltech.

The key to the new material's performance is its ability to trap light. The longer a photon bounces around inside the active part of any solar cell, the greater the chance it will dislodge an electron. All high-performance solar cells have antireflective coatings that help trap light. But these cells require far more silicon and must be sawed from wafers, a wasteful process.

"The promise of light trapping has always been that you could use less silicon and bring the costs down, but it's been difficult to implement," says Eli Yablanovitch, professor of electrical engineering at the University of California, Berkeley, who was not involved with the research.

Many groups have turned to structures such as nanowires and microwires in an effort to solve this problem. The Caltech group's photovoltaic material, which uses silicon microwires, demonstrates a new level of performance largely due to the addition of reflective nanoparticles.

Atwater's group grew arrays of silicon microwires from a gas on the surface of a reusable template. The template dictates how thickly the forest of wires will grow, and the diameter of each wire. The arrays are arranged sparsely, and without further treatment, make a poor solar material. But the wires are treated with an antireflective coating and coated in a rubbery polymer mixed with highly reflective alumina nanoparticles. Once the polymer sets, the entire thing can be peeled off like a sticker. Over 90 percent of the resulting material is composed of the cheap polymer, and the template can be used again and again.

"These materials are pliable, but they have the properties of a silicon wafer," says Atwater. When light hits the composite solar mats, it bounces around, reflecting off the alumina particles until it can be absorbed by a microwire.

Solar efficiency: This diagram shows how the new composite material traps sunlight so efficiently.


Even though the microwire arrays are quite sparse, the reflective particles ensure that very little light escapes before it's absorbed. The Caltech group has not yet published details of the material's performance as part of a solar cell, but the composite has demonstrated very good numbers for light absorbance and electron carrier collection.

"There are three things a solar cell has to do: it has to absorb the light, collect all the [electrons], and generate power," says Atwater. The material can absorb 85 percent of the sunlight that hits it, and 95 percent of the photons in this light will generate an electron. Until the results are published, the Caltech group won't disclose their power generation results.

"What's exciting is, you can use a lot less material to make a solar cell--two orders of magnitude less," says Yi Cui, professor of materials science at Stanford University. This will do more than just lower the material's costs. "Once you use less material for deposition, your manufacturing line is shorter," Cui explains. This has two business implications: it should take less capital investment to build the factories needed to make the cells, and it should be possible to produce them at a faster rate.

Atwater's group is now working on making the photovoltaic material over a larger area and incorporating it into prototype solar cells. The results published so far come from proof of concept experiments using square centimeters of the material. "We have to do the normal unglamorous engineering: making low-resistance electrical contacts, and making large areas, hundreds of square centimeters," says Atwater. He adds that although the material is put together in a novel way, it can be made using a combination of techniques that are well established and scalable.

By Katherine Bourzac

From Technology Review

From Waste Biomass to Jet Fuel

A novel chemical process developed by researchers at the University of Wisconsin-Madison converts cellulose from agricultural waste into gasoline and jet fuel. It produces fuel by modifying what until now had been considered unwanted by-products (levulinic acid and formic acid) of breaking cellulose down into sugar. The work was described in this week's issue of the journal Science.

Biofuel tap: Liquid fuel drains out of the butene oligomerization reactor, the last part of a new chemical process for making biofuels from cellulose.


The process is one of a number of new technologies that make conventional fuels such as gasoline and diesel from biomass rather than petroleum. Unlike ethanol--today's most common type of biofuel--these new fuels can easily be used in conventional automobiles and transported with existing infrastructure. What's more, the jet fuel it produces stores enough energy to power commercial or military airplanes.

Up to now, however, methods to make these advanced biofuels have often involved biological processes in which microbes break down sugars derived from biomass, including cellulose. The Wisconsin method could prove more reliable than those processes because it is a chemical process that's easier to maintain. What's more, carbon dioxide created during its production can be easily captured--an advantage over conventional biofuels.

To convert cellulose, a large component of biomass, into fuel, researchers first need to break it down into simpler components, such as simple sugars. Microorganisms then process those sugars to make liquid fuels. Cellulose can be broken down by treating it with acids, but these reactions are difficult to control--the sugars are often further converted into formic and levulinic acids. "Rather than fight it, we wondered if we could start with the unwanted product to make fuel," says James Dumesic, professor of chemical and biological engineering at the University of Wisconsin-Madison.

It is "an entirely different approach to making biofuels," says Bob Baldwin, thermochemical process manager at the National Renewable Energy Laboratory in Golden, CO, who was not involved with the work. In the Wisconsin process, the acids are combined to form gamma-valerolactone, an industrial chemical. Catalysts made of silica and alumina then help convert this to a gas called butene, which is easily converted to liquid hydrocarbon fuels, including gasoline and jet fuel.

One advantage of the Wisconsin process compared to biological routes to biofuels is that it could decrease greenhouse gas levels, says Doug Cameron, managing director and chief science advisor at Piper Jaffray. Conventional biofuels are at best carbon neutral--growing crops for biofuels takes carbon dioxide out of the atmosphere, but this carbon is released again when the crops are processed and the biofuels are manufactured and burned. The new process produces a pure and high-pressure stream of carbon dioxide, which is easy to capture and permanently store. As a result, the net carbon emissions could be negative--part of the carbon dioxide absorbed by the plants would be prevented from returning to the atmosphere.

However, economic questions remain. Baldwin says that although the process produces high yields of the desired fuels, it requires a large number of processing steps, including separating cellulose from other components of biomass, which could make it expensive. It will also need to compete with other thermochemical processes that can be adapted to work with biomass, such as those that have been used to convert coal into liquid fuels.

By Kevin Bullis

From Technology Review

Implanted Neurons Let the Brain Rewire Itself Again

Transplanting fetal neurons into the brains of young mice opens a new window on neural plasticity, or flexibility in the brain's neural circuits. The research, published today in the journal Science, suggests that the brain's ability to radically adapt to new situations might not be permanently lost in youth, and helps to pinpoint the factors needed to reintroduce this plasticity.

Flexible again: Neurons transplanted from an embryo into the brain of a young mouse are shown here. These neurons trigger a new period of neural plasticity in the animals' brains.


A better understanding of brain plasticity could one day point to new ways to treat brain injury and other neurological problems by returning the brain to a younger, more malleable state. "[The findings] reveal there must be a factor that can induce plasticity in the brain," says Michael Stryker, a neuroscientist at the University of California, San Francisco, who was involved in the research. "We hope that future studies will reveal what it is that allows the cells to induce this new period of plasticity."

In the study, researchers examined a well-known phenomenon seen in the visual system of both mice and humans, during what is known as the "critical period" of development. If young animals are deprived of visual input in one eye during this stretch--about 25 to 30 days of age in mice--their visual systems will rewire to maximize visual input from the functioning eye. As a result, vision in the other eye is permanently impaired. "The cortex says, 'I'm not getting information from this side, so just pay attention to the other eye,' " says Arturo Alvarez-Buylla, also part of the UCSF team. After the critical period, depriving one eye of input has little long-term impact on vision.

To try to find out what triggers the neural plasticity seen during this period, the researchers took a specific type of neuron from the brains of fetal mice and grafted them into mice that had either just been born or were approximately 10 days old. Known as inhibitory interneurons, these cells release a chemical signal that quiets neighboring cells, making it more difficult for them to fire. The transplanted neurons, labeled with a fluorescent marker, began migrating to their normal place in the brain and making connections with resident neurons.

The mice went through the typical critical period, at about 28 days of age. But the transplanted neurons seemed to induce a second critical period, which was timed to the age of the transplanted cells rather than the age of the animals. The later critical period occurred when the transplanted neurons were about 33 to 35 days old, the same age as resident inhibitory interneurons during the normal critical period. (The neurons arise in the brain before birth.)

Scientists aren't yet sure how the cells induce this second period of malleability. Stryker's team and others had previously shown that the cells' inhibitory signaling plays a key role--the critical period can be delayed or induced earlier by mimicking the inhibitory effects of the cells with drugs, such as valium. But in these previous experiments, it was not possible to induce a second critical period after the normal one. "Once you've had it, you can never get another one, at least until these transplant experiments," says Stryker. "That shows there is something other than just the inhibitory [chemical] they release that must be involved in this process." Researchers plan to transplant different types of inhibitory neurons, in an attempt to find the specific cell type responsible.

"I would love to see if the same sort of transplant worked in older animals," says Jianhua Cang, neuroscientist at Northwestern University, in Chicago. "This work is a significant advance, but if one can do it in adult animals, it would be even more remarkable. And it opens the possibility of therapeutic potential." Cang was not involved in the current research, thought he has previously worked with the authors.

The findings could have wide-reaching implications for how we think about the nature of plasticity in the brain. Humans have a similar critical period, though in humans this phase is more extended than in mice. Infants and children with a lazy eye or a cataract will suffer permanent vision loss if the problem isn't corrected before about eight years of age, says Takao Hensch, a neuroscientist at Children's Hospital Boston, who was not involved in the current study. (During normal development, this period of plasticity is thought to be important for developing balanced input from both eyes, which is crucial for binocular vision.)

The phenomenon isn't limited to the visual system--scientists think that most parts of the cortex undergo a similar period of heightened malleability. For example, children stop distinguishing certain speech sounds after a particular age. "The classic example is kids growing up in Japan," says Hensch. "They eventually lose the ability to differentiate between 'R' and 'L' sounds."

If scientists can find a controlled way to trigger plasticity in specific parts of the brain, it would open new avenues for treatment of a variety of ailments. Adults who suffer brain damage from stroke or head trauma have some level of reorganization in the brain--enhancing that plasticity might improve recovery.

"Many psychiatric illnesses are recognized as having neurodevelopmental origins, in particular, deficits in inhibitory circuits," says Hensch. For example, many genes linked to autism may trigger an imbalance in excitation and inhibition in neural signaling, he says. "If you can restore that imbalance, you might imagine intervening during development or later in life to try to restore brain function."

Still, a long road lies ahead. To apply this type of cell transplant to humans, scientists would first need to develop a reliable source of the necessary cells, perhaps from induced pluripotent stem cell reprogramming. They would then need to show the cells can be safely transplanted into the brain. Figuring out how to properly capitalize on the newfound plasticity presents another hurdle. It's not clear whether patients would need some kind of specific training or drug treatment to properly reorganize damaged neural circuits. "For higher cognitive functions, you might need to train people cognitively in the presence of plasticity-enhancing neurons," says Hensch.

"Ideally, it would be nice to find a way to coax [the birth of new neurons in the brain] through some pharmacological or environmental means to get more of these inhibitory cells to appear," he says. "That seems like quite a challenge, but this research gives us hope it's worth trying."

By Emily Singer

From Technology Review

Bloom Reveals New Fuel Cells

The up-to-now secretive startup Bloom Energy took the wraps off its technology this week, unveiling a fuel-cell system that the company claims can run on a variety of fuels and pay for itself in three to five years via lower energy bills.

The company's founder and CEO, KR Sridhar, said at the official unveiling of the company on Wednesday that the technology--when it's powered by natural gas--can cut carbon dioxide emissions in half compared to the emissions produced by conventional power sources, on average. Several major companies, including Google, eBay, and Walmart, have already bought Bloom's technology, and in the few months these fuel cells have been in operation, they've generated 11 million kilowatt hours of electricity (about enough to power 1,000 homes for a year).

Bloom box: A 100 kilowatt module at eBay's facility in San Jose, CA.


According to Bloom Energy, electricity costs are lower than buying electricity from the grid because the fuel cells are efficient and because the electricity is generated on-site, avoiding the need for a grid to distribute electricity.

While Bloom is not releasing full details of the technology, it's a type of solid-oxide fuel cell (SOFC). Unlike hydrogen fuel cells proposed for use in vehicles, SOFCs operate at high temperatures (typically well over 600 ºC) and can run on a variety of fuels. They can be more efficient than conventional turbines for generating electricity. But their high cost and reliability problems have kept them from widespread commercial use.

Sridhar says Bloom's technology has made the fuel cells affordable. What's more, costs are expected to decrease significantly as production ramps up.

"All indications are that they have taken pretty conventional SOFC technology (zirconia electrolyte, nickel anode) and spent a lot of money to do a very good job of engineering and process development," says Jeff Bentley, CEO of CellTech Power, which is developing its own fuel cells that can run on fuels such as diesel and even coal. According to Bloom, the technology is based on planar solid oxide fuel cells that Sridhar developed as a professor at the University of Arizona.

Bloom sells 100-kilowatt modules. They're made of small, flat 25-watt fuel cells that can be stacked together. A complete 100-kilowatt module, with multiple stacks and equipment for converting DC power from the stacks into AC power to be used in buildings, is about the size of a parking space. The company says each module can power a small supermarket.
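
The module figures quoted above imply some simple arithmetic, sketched below; the per-home consumption value is an assumed rough U.S. average, not a number from Bloom.

```python
# Rough arithmetic implied by the figures in the article.
module_watts = 100_000          # one Bloom module: 100 kW
cell_watts = 25                 # one flat fuel cell: 25 W
print(module_watts // cell_watts)      # 4000 cells per 100 kW module

total_kwh = 11_000_000          # electricity generated so far, per the article
home_kwh_per_year = 11_000      # assumed average annual U.S. household use
print(total_kwh / home_kwh_per_year)   # ~1,000 homes powered for a year
```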

In addition to Google, eBay, and Walmart, Bloom's customers include Bank of America, Coca-Cola, Cox Enterprises, FedEx, and Staples. A 400-kilowatt system powers a building at Google that contains an experimental data center. Walmart has installed Bloom modules at two locations, where they generate between 60 and 80 percent of the electricity for the stores.

Sridhar said the long-term goal is to use the technology as both a way to generate electricity and to store it. It's possible to run the fuel cells in reverse, pumping in electricity to generate fuel. The system could then be used to store solar power generated during the day as a fuel for use at night. He says such a system, however, won't be available for another decade.

The company first started raising venture capital in 2001, and was the first energy company to be funded by Kleiner Perkins Caufield & Byers, a venture capital firm based in Menlo Park, CA, that was an early investor in Google. "They have spent a lot of time and money in field testing prior to making any public claims--that is refreshing for the fuel cell industry," Bentley says.

By Kevin Bullis

From Technology Review

Less Is More in Google's OS


Most of this article was written on a six-year-old computer running Google's new Chromium OS.

"Chromium OS" is the open-source version of the new Chrome OS that Google is developing for netbooks, tablets, and other lightweight machines. It's built from the source code that Google is making widely available, but it runs on standard hardware. Google's Chrome OS, in contrast, is designed to run on a new generation of stripped-down systems. These systems will probably be missing some of that legacy hardware necessary to run Windows, but will make up for this with a somewhat lower cost, lightning-fast boot times, and even some added security measures that will make them practically virus-proof.

You can download and run Chromium OS today if you know where to find it, but be careful. If you just Google for "chrome OS download," you'll probably end up with a modified version of Suse Linux--the "fake Google Chrome OS." Ironically this will run just fine on most netbooks, but you've got to be a Linux master to use it. You can edit files on the Web with Google Docs, or on the local computer with Open Office (which is included), but you need to keep track of where your documents are saved and manually move them back and forth. You can download and install cool programs from Linux software repositories, but there is always a chance that a program that you download might hack your system and steal your data. And you need to remember to run "software update" on a regular basis to install those security patches. In many ways the experience is quite similar to running Microsoft Windows or MacOS.

In contrast, the real Chromium OS is a completely different approach to operating systems. What you get is a whole lot less than a fine Linux distribution with a bunch of open-source software. Perplexingly, this ends up being a whole lot more useful, user-friendly, and secure.

Think of Chromium OS as a copy of Google's Chrome browser running on top of a Linux kernel. The version I tested has no window manager (which means no overlapping windows)--instead, the browser's window expands to fill the computer's screen. The developer versions require that you log in (although everybody uses the same username and password). The browser then opens to Google's home page, and you log in to your Google account. To edit a document you just click on "More," then "Documents," then edit the document in Google Docs.

There's not a lot to configure with Chromium OS, and that's kind of the point. The browser has a little wrench-like icon that lets you change your time zone, or the sensitivity of the touch-pad, or enable tap-to-click. You can connect to a Wi-Fi network, specify a home page (although the default is Google, of course), and you can tell Chromium to save your passwords. You can also delete your cookies, clear your cache, and restore the system to defaults. Bookmarks sync to your Google account. And that's pretty much it, at least for now.

The power of this operating system comes from the fact that you can't download and install software. Other than the Web browser, anything that you might want to run has to be run out there on the Internet, ideally from Google's cloud. For example, I clicked on a Web page containing a link to a PDF, and Chromium showed me the PDF's contents by running it through Google Docs. I tried looking at my bookmarks, and they were all there--synched through my Google account. Even a YouTube video worked, although it was kind of slow on this old system.


What's really revolutionary about Chrome is the way the operating system will secure and update itself. According to a video posted on the Google website by Google security engineer Will Drewry, Google knows in advance every application that will be run inside the Chrome OS. "We can secure them appropriately. Everything else is a web app."

This means that things like Facebook, YouTube, and Google Calendar will run just fine. So will Quicken--provided that you are using the Web-based version rather than the one you download and install.

Of course even the rather small set of software installed on the netbook will need to be updated from time to time. When that happens, Chrome OS will actually create a second bootable copy of the operating system on the computer's disk. Chrome OS will try to run this second copy the next time it boots. If the second copy works, it will become the primary copy. If it fails, the netbook will notice that the second copy didn't work, it will report this back to Google, wipe the second copy, and try again. This is the same kind of technology that Google perfected for updating Windows-based helper apps like Google Toolbar and Google Desktop Search.
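
The dual-copy update scheme described above is essentially a small state machine. The sketch below is a conceptual illustration of that A/B logic, not Google's actual updater; the partition dictionary and the three callables are invented for the example.

```python
def apply_update(partitions, download_new_image, boot_succeeds, report_failure):
    """Conceptual A/B update flow: write the update to the spare slot, try to
    boot it once, then either promote it or roll back and report the failure.

    partitions: dict with 'primary' and 'secondary' slots.
    The three callables stand in for the real download, boot, and report steps.
    """
    partitions['secondary'] = download_new_image()        # write OS copy #2
    if boot_succeeds(partitions['secondary']):
        # The new copy booted, so it becomes the primary for future boots.
        partitions['primary'], partitions['secondary'] = (
            partitions['secondary'], partitions['primary'])
    else:
        # Boot failed: keep the old primary, report home, wipe the bad copy.
        report_failure()
        partitions['secondary'] = None                    # wiped; retry later
    return partitions
```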

User data will be kept encrypted in a second partition on the netbook. If for some reason the netbook really won't reboot, it will be possible to create a USB drive that will wipe the operating system and reload it with a fresh version--but leave the user's personalized files intact. (Chrome OS will almost certainly support some kind of offline access to user data, probably using the "Google Gears" system that can be used now to provide offline access to Gmail or Google Reader when using Firefox, Internet Explorer, or the Chrome browser.)


Although you can download and run Chromium OS now, I don't recommend it. It's hard to find reliable distributions of Chromium online (I tried several before settling on the one available at chromeos.hexxeh.net), the hardware integration is weak, and the user interface is really not finished. (For an idea of where Google is heading, check out this video by "Glen" at Google, or look at these Chrome OS tablet mock-ups.)

Even so, Chrome OS has big potential, and it could fundamentally change the way many of us interact with computers--so stay tuned.

By Simson Garfinkel

From Technology Review

A Brain Implant that Uses Light

Researchers at Medtronic are developing a prototype neural implant that uses light to alter the behavior of neurons in the brain. The device is based on the emerging science of optogenetic neuromodulation, in which specific brain cells are genetically engineered to respond to light. Medtronic, the world's largest manufacturer of biomedical technologies, aims to use the device to better understand how electrical therapies, currently used to treat Parkinson's and other disorders, assuage symptoms of these diseases. Medtronic scientists say they will use the findings to improve the electrical stimulators the company already sells, but others ultimately hope to use optical therapies directly as treatments.

Light therapy: A neuron (green) engineered to express a light-sensitive protein fires in response to specific wavelengths of light. A glass electrode (lower left corner) records the neuron’s electrical response. Researchers from Medtronic used this system to confirm that a new implantable stimulator can properly activate neurons with light.


Today's neural implants work by delivering measured doses of electrical stimulation via a thin electrode surgically inserted through a small hole in a patient's skull, with its tip implanted in a localized brain area. Since the U.S. Food and Drug Administration approved such "brain pacer" devices and the electrically based treatment they deliver--called Deep Brain Stimulation (DBS)--for a disorder called essential tremor in 1997, for Parkinson's disease in 2002, and for dystonia in 2003, over 75,000 people have had them implanted. The electrical pulses are thought to counter the abnormal neural activity that results from different diseases, though physicians know little about how DBS works.

Despite their success, such neural prostheses have serious drawbacks. Beyond the blunt fact of their physical locations, they stimulate neurons near the electrode indiscriminately. That overactivity can trigger dizziness, tingling, and other side effects. Furthermore, they produce electrical "noise" that makes tracking quieter neural signals difficult and the simultaneous use of scanning systems like MRI practically impossible, which in turn prevents researchers from gaining any evidence about how DBS actually works.

In the last few years, scientists have developed a way to stimulate neurons using light rather than electricity. Researchers first introduce a gene for a light-sensitive molecule, called channelrhodopsin 2 (ChR2), into a specific subset of neurons. Shining blue light on these neurons then causes them to fire. One advantage of this approach is its specificity--only the neurons with the gene are activated. It also provides a way to shut neurons off--introducing a different molecule, halorhodopsin (NpHR), silences the cells in response to yellow light. "That's the other unique thing about this approach," says Tim Denison, senior IC engineering manager in Medtronic's neuromodulation division. "It allows us to silence neurons' activity, which is extraordinarily difficult with electrostimulation."

While academic scientists are developing new tools to deliver light to the brain, Medtronic is developing an optogenetically based implant for commercial use. The module, which is approximately the size and shape of a small USB flash drive, has wireless data links, a power management unit, a microcontroller, and an optical stimulator. It uses a fiber-optic wire to direct light from a blue or green LED at target neurons in the brain. The company plans to market the device to neuroscience researchers and use it for in-house research on the effects of DBS.

Medtronic scientists emphasize the very early nature of the device. "This is research for use with animal models and not ready for any kind of human translation currently," says Denison. Still, he continues: "What's exciting is that therapies today remain based on these electrically based ideas from the 19th century. Now this novel, disruptive technology offers a unique interface to the nervous system."

Today, over 500 laboratories are applying optogenetic tools to animal models of Parkinson's, blindness, spinal injury, depression, narcolepsy, addiction, and memory. Medtronic, which has built its business by pioneering market implementation of medical research, has consulted extensively with optogenetic pioneers Karl Deisseroth of Stanford and Ed Boyden of MIT to build an implant to support this new science. (Boyden is an occasional columnist for Technology Review.)

In order to transform the research implant into a clinical device, Medtronic or others will need to find ways to safely deliver the necessary genes to specific neural circuits in the brain. Denison says he thinks that development of practical optogenetic-based therapies for human patients will be gradual. "Frankly, this is a technology I can see my son working on as a Medtronic employee," he says.

MIT's Boyden, however, envisions a more accelerated development: "I think it's more in the three- to 10-year span," he says. Boyden has cofounded a company, Eos, to develop gene therapies to cure blindness. (Because it targets the eye, this therapy would not require an implant.) Jerry Silver of Case Western Reserve University has a startup, LucCell, that aims to use such therapies to restore damaged spinal cord function. "Gene therapy is a maturing field," says Silver. "There's a virus type called AAV--adeno-associated virus--that is natural, that almost all of us already carry, that has no symptoms, and that already has been used in many hundreds of patients without a single serious adverse event."

Overall, Boyden concludes: "In many neural or psychiatric disorders, a very small fraction of brain cells have very big alterations --Parkinson's is the death of perhaps a few thousand cells. If with optogenetics you can correct those downstream targets without altering all the 'normal neurons'--in quotes--you could solve our present problem, which is that every drug for treating brain disorders has very serious side-effects and neural implants are extremely blunt instruments. So that's the hope."

By Mark Williams

From Technology Review

Brain System Behind General Intelligence Discovered

The study, to be published the week of February 22 in the early edition of the Proceedings of the National Academy of Sciences, adds new insight to a highly controversial question: What is intelligence, and how can we measure it?

The research team included Jan Gläscher, first author on the paper and a postdoctoral fellow at Caltech, and Ralph Adolphs, the Bren Professor of Psychology and Neuroscience and professor of biology. The Caltech scientists teamed up with researchers at the University of Iowa and USC to examine a uniquely large data set of 241 brain-lesion patients who all had taken IQ tests. The researchers mapped the location of each patient's lesion in their brains, and correlated that with each patient's IQ score to produce a map of the brain regions that influence intelligence.
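
In spirit, the mapping the team describes is a voxel-by-voxel comparison of IQ between patients whose lesions do and do not touch each location. The sketch below illustrates that general idea with a simple per-voxel t-test; it is not the statistical procedure used in the PNAS paper, and the array names are invented.

```python
import numpy as np
from scipy import stats

def lesion_symptom_map(lesion_masks, iq_scores, min_patients=5):
    """For each voxel, compare the IQ scores of patients whose lesion covers
    that voxel with those whose lesion does not (a basic lesion-symptom map).

    lesion_masks: boolean array of shape (num_patients, num_voxels)
    iq_scores:    array of shape (num_patients,)
    Returns a t-statistic per voxel (NaN where too few patients are lesioned).
    """
    iq_scores = np.asarray(iq_scores, dtype=float)
    num_voxels = lesion_masks.shape[1]
    t_map = np.full(num_voxels, np.nan)
    for v in range(num_voxels):
        lesioned = iq_scores[lesion_masks[:, v]]
        intact = iq_scores[~lesion_masks[:, v]]
        if len(lesioned) >= min_patients and len(intact) >= min_patients:
            t_map[v] = stats.ttest_ind(lesioned, intact, equal_var=False).statistic
    return t_map
```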

The brain regions important for general intelligence are found in several specific places (orange regions shown on the brain on the left). Looking inside the brain reveals the connections between these regions, which are particularly important to general intelligence. In the image on the right, the brain has been made partly transparent. The big orange regions in the right image are connections (like cables) that connect the specific brain regions in the image on the left.


"General intelligence, often referred to as Spearman's g-factor, has been a highly contentious concept," says Adolphs. "But the basic idea underlying it is undisputed: on average, people's scores across many different kinds of tests are correlated. Some people just get generally high scores, whereas others get generally low scores. So it is an obvious next question to ask whether such a general ability might depend on specific brain regions."

The researchers found that, rather than residing in a single structure, general intelligence is determined by a network of regions across both sides of the brain.

"One of the main findings that really struck us was that there was a distributed system here. Several brain regions, and the connections between them, were what was most important to general intelligence," explains Gläscher.

"It might have turned out that general intelligence doesn't depend on specific brain areas at all, and just has to do with how the whole brain functions," adds Adolphs. "But that's not what we found. In fact, the particular regions and connections we found are quite in line with an existing theory about intelligence called the 'parieto-frontal integration theory.' It says that general intelligence depends on the brain's ability to integrate -- to pull together -- several different kinds of processing, such as working memory."

The researchers say the findings will open the door to further investigations about how the brain, intelligence, and environment all interact.

The work at Caltech was funded by the National Institutes of Health, the Simons Foundation, the Deutsche Akademie der Naturforscher Leopoldina, and a Global Center of Excellence grant from the Japanese government.

From sciencedaily.com

Mice Get Human Livers

Scientists at the Salk Institute for Biological Studies have engineered a mouse with a mostly human liver by injecting human liver cells, or hepatocytes, into genetically engineered mice. Researchers say the mouse/human chimera could serve as a new model for discovering drugs for viral hepatitis, a disease that has been notoriously difficult to replicate and study in the lab. The team exposed the altered mice to hepatitis B and C viruses and, after treating the rodents with conventional drugs, found that the mice responded much like human patients.

Hepatitis model: Human liver cells (in green) that are injected into mice take over, creating a mostly human liver. When the mice are infected with hepatitis B, scientists can study the virus's effects in vivo and test new drugs on the "humanized" liver.


In the United States, 1.2 million people are chronically infected with hepatitis B, and 3.2 million with hepatitis C. Searching for effective treatments and drug combinations for viral hepatitis has been a frustrating challenge for years.

In the laboratory, hepatitis and the liver cells it infects can be cagey and temperamental. Human liver cells immediately change character when taken out of the body, and are difficult to grow in a petri dish. What's more, hepatitis only infects humans and chimps, having virtually no effect in other species, meaning conventional lab animals like mice and rats are useless as live models. "You could do chimp studies, but that is not very convenient, and it is of course an ethical issue," says Karl-Dimiter Bissig, first author of the group's paper, published in the Journal of Clinical Investigation. "There's really a need to develop animal models where you can make a human chimerism and study the virus."

Bissig says his group's mouse/human chimera improves on a similar model developed several years ago that was genetically engineered to give human liver cells a growth advantage when injected into a mouse liver. Researchers engineered the mouse with a gene that destroyed its own liver cells. This programmed death gave human liver cells an advantage, and when researchers injected human hepatocytes, they were able to take over and repopulate the mouse liver. However, scientists found that the genetically engineered mice tended to die off early, which required injecting human liver cells within the first few weeks after birth--a risky procedure that often resulted in fatal hemorrhaging.

Instead, Bissig and his colleagues, including Inder Verma of the Salk Institute, sought to engineer a mouse chimera in which the introduction of human liver cells could be easily controlled. The group first engineered mice with several genetic mutations, which eliminated production of immune cells so that the mice would not reject human liver cells as foreign. The researchers made another genetic mutation that interfered with the breakdown of the amino acid tyrosine. Normally, tyrosine is involved in building essential proteins. To keep a healthy balance, the liver clears out tyrosine, keeping it from accumulating to toxic levels. Bissig's mutation prevents tyrosine from breaking down, so it builds up in the mouse liver cells and eventually kills them, giving the human cells an advantage.

To avoid killing mouse liver cells too early (or killing the mice entirely), Bissig's team administered a drug that blocks the toxic byproducts of tyrosine buildup from killing liver cells. By putting the mice on the drug, and taking them off the drug a little at a time, researchers found that they could control the rate at which rodent liver cells died off.

The team then injected mice with hepatocytes from various human donors, and found that the cells were able to take over 97 percent of the mouse liver. The "humanized" mice were then infected with hepatitis B and C, and researchers found high levels of the virus in the bloodstream--versus normal mice, which are impervious to the disease and are able to clear the virus out quickly.

Bissig and his colleagues went a step further and treated the infected mice with a drug typically used to treat humans with hepatitis C. They found that, after treatment, the mice exhibited a thousand-fold decrease in viral concentration in the blood, similar to drug reactions in human patients.

Charles Rice, who heads the laboratory for virology and infectious disease at Rockefeller University, says the new chimeric model is a robust improvement over existing study models for viral hepatitis. Further improvements, Rice explains, could include engineering human cell types, other than hepatocytes, that also appear in the human liver. While the majority of the human liver is composed of hepatocytes, there are a few other cell types that may interact with hepatocytes and affect how a virus infects the liver. Engineering other liver cells could more accurately depict a working human liver and its response to disease.

Raymond Chung, associate professor of medicine at Harvard Medical School, suggests another improvement in designing an accurate mouse/human liver: to engineer a mouse with a human immune system. "This is still not an ideal model," says Chung of Bissig's research. "You can't necessarily accurately evaluate antiviral drugs given the lack of adaptive immune response in these animals."

Bissig says that in the future, he and his team hope to add a human immune system to their mouse model, so they can see how hepatitis acts, not only in a human liver, but in the presence of a normal, healthy human immune system.

By Jennifer Chu

From Technology Review

The Stimulus Bill, One Year Later

In the year since it was enacted by Congress, the federal stimulus bill has helped the solar and wind markets grow in the United States, but has done relatively little to boost domestic renewable energy manufacturing.

Last February's stimulus bill, aka the American Recovery and Reinvestment Act, allocated $45.1 billion for renewable energy, energy efficiency, and other energy-related programs and incentives. As expected, due to the multiyear nature of many of the projects, most of that money hasn't been spent yet. For example, of the $36.7 billion the U.S. Department of Energy has to spend, so far it has distributed just $2.4 billion (although it's announced awards totaling $25.4 billion).

Solar recovery: A worker at a New Mexico plant owned by the German company Schott Solar inspects a solar receiver, which absorbs concentrated sunlight to help generate electricity. The plant has increased its workforce in part because of last year’s Recovery Act.


But the money that has been spent on renewable energy--and the anticipation by investors of more to come--has helped increase the size of the solar and wind markets in the United States. The biggest help has come from grants for building renewable energy projects, such as solar and wind farms. This money--$2.3 billion has been spent so far--helped turn around what was expected to be a dismal year for wind and solar markets in the United States, says Edward Feo, a partner in the law practice of Milbank Tweed Hadley & McCloy. Feo says that experts expected the wind market to drop sharply in 2009 because of the poor economy and tight credit markets. Most presumed there would be half as many installations as the year before. Instead, wind installations increased from about 8,000 megawatts of wind power in 2008 to almost 10,000 megawatts in 2009, he says.

Similarly, the solar industry continued to grow last year. So far the grants have funded 182 solar energy projects, according to the Solar Energy Industries Association. Altogether, stimulus-related incentives have created 18,000 jobs in the United States, the association says.

The impact of the stimulus is expected to be even greater this year. Last year, the grants were only available for the second half of the year; they'll be available for the whole of 2010. What's more, none of the loan guarantees that had been authorized by the stimulus for renewable energy projects have been issued so far. Such loan guarantees, some of which are expected this year, could be key for some projects to get financed, says Feo. This is especially true for large-scale solar thermal power plants. (There have been a few loan guarantees issued from a 2005 energy bill, such as one to the solar company Solyndra, and another to Nordic Windpower.)

But the larger markets for wind and solar don't immediately translate into manufacturing jobs in renewable energy--at least not within the United States. They do create jobs installing wind and solar farms and maintaining them, but 75 percent of the jobs that these projects generate are not in installation or maintenance, but rather in manufacturing the solar panels and wind turbines, says Joan Fitzgerald, director of the Law, Policy and Society Program at Northeastern University. And for the most part, this manufacturing is done outside of the United States, in places such as Germany and China. In addition to further incentives to grow the market for renewable energy, such as a national requirement that utilities use a certain amount of it, it's important to have incentives to help companies build new factories and retool existing ones, she says.

Some such incentives exist now, such as the loan guarantees made possible under the 2005 energy bill. The stimulus bill also established a manufacturing tax credit that would provide credits equal to 30 percent of the cost of constructing factories that make clean energy products (this includes making solar panels or wind turbines, for example, as well as the parts or machines needed to make them). Last month, President Obama announced 183 projects that will get these credits, which will come to $2.3 billion and will fund projects worth about $7.7 billion. Last year the administration also granted loans for the manufacturing of advanced technology vehicles, such as electric cars. The loans were established by the Energy Independence and Security Act of 2007, but hadn't been granted.

The stimulus bill has started to draw manufacturing from foreign companies to the United States. In part because of the bill, a solar factory in New Mexico owned by the Germany-based Schott Solar doubled the number of employees assembling solar panels. The Chinese company Suntech pushed up plans to build a factory in Arizona by one to two years in response to the Recovery Act, says Roger Efird, the managing director of Suntech America, a branch of Suntech Power.

Although the money hasn't been spent yet for two key projects--the smart grid and high-speed rail--the DOE and the U.S. Department of Transportation have announced who the money will be awarded to, and will soon start to distribute funds. The smart grid and high-speed rail projects' funding will come to $12.5 billion.

By Kevin Bullis

From Technology Review

Viruses Helped Shape Human Genetic Variability

Viruses have represented a threat to human populations throughout history and still account for a large proportion of disease and death worldwide. The identification of gene variants that modulate the susceptibility to viral infections is thus central to the development of novel therapeutic approaches and vaccines. Due to the long relationship between humans and viruses, gene variants conferring increased resistance to these pathogens have likely been targeted by natural selection. This concept was exploited to identify variants in the human genome that modulate susceptibility to infection or the severity of the ensuing disease.

New research shows that viruses have played a role in shaping human genetic variability.


In particular, the authors based their study on the idea that populations living in different geographic areas have been exposed to different viral loads and have therefore been subjected to variable virus-driven selective pressure. By analyzing genetic data for 52 populations distributed worldwide, the authors identified variants that occur at higher frequency where the viral load is also high. Using this approach, they found 139 human genes that modulate susceptibility to viral infections; the protein products of several of these genes interact with one another, and often with viral components.
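
The core of the analysis is a rank correlation, computed variant by variant, between allele frequency and the viral load of the environment each population lives in. The Python sketch below is a toy illustration of that idea, not the authors' actual pipeline; the population names, viral-load scores, and allele frequencies are invented for demonstration.

    # Toy illustration of correlating allele frequency with viral load across
    # populations. All numbers below are made up for demonstration purposes.
    from scipy.stats import spearmanr

    # Hypothetical per-population viral load scores (higher = more endemic viruses).
    viral_load = {"PopA": 12, "PopB": 25, "PopC": 31, "PopD": 8, "PopE": 19}

    # Hypothetical allele frequencies of two variants in the same populations.
    variants = {
        "rs_candidate": {"PopA": 0.21, "PopB": 0.44, "PopC": 0.52, "PopD": 0.15, "PopE": 0.33},
        "rs_neutral":   {"PopA": 0.48, "PopB": 0.47, "PopC": 0.50, "PopD": 0.52, "PopE": 0.49},
    }

    populations = sorted(viral_load)
    loads = [viral_load[p] for p in populations]

    for name, freqs in variants.items():
        rho, p_value = spearmanr(loads, [freqs[p] for p in populations])
        print(f"{name}: rho={rho:.2f}, p={p_value:.3f}")
        # A variant whose frequency rises with viral load (high rho, low p, after
        # correcting for ancestry and multiple testing) is a candidate target of
        # virus-driven selection.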

The study relied on predictions generated in computer simulations; therefore, experimental validation of these results will be required. The authors conclude that approaches similar to the one they applied might be used to identify susceptibility variants for infections transmitted by pathogens other than viruses.

From sciencedaily.com

Youngest Extra-Solar Planet Discovered Around Solar-Type Star

The giant planet, six times the mass of Jupiter, is only 35 million years old. It orbits a young, active central star at a distance closer than Mercury orbits the Sun. Young stars are usually excluded from planet searches because they have intense magnetic fields that generate a range of phenomena known collectively as stellar activity, including flares and spots. This activity can mimic the presence of a companion and so can make it extremely difficult to disentangle the signals of planets from those of activity.

Artist's impression of BD+20 1790b.


University of Hertfordshire astronomers, Dr Maria Cruz Gálvez-Ortiz and Dr John Barnes, are part of the international collaboration that made the discovery.

Dr Maria Cruz Gálvez-Ortiz, describing how the planet was discovered, said: "The planet was detected by searching for very small variations in the velocity of the host star, caused by the gravitational tug of the planet as it orbits -- the so-called 'Doppler wobble technique.' Overcoming the interference caused by the activity was a major challenge for the team, but with enough data from an array of large telescopes the planet's signature was revealed."
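
To get a feel for the size of that signal, the standard radial-velocity formula can be evaluated for a six-Jupiter-mass planet on a tight orbit around a Sun-like star. The numbers in the sketch below (a circular, edge-on, 10-day orbit around a one-solar-mass star) are illustrative assumptions, not the published parameters of BD+20 1790b.

    # Back-of-the-envelope radial-velocity semi-amplitude K for a massive, close-in planet:
    # K = (2*pi*G / P)**(1/3) * m_p * sin(i) / ((M_star + m_p)**(2/3) * sqrt(1 - e**2))
    import math

    G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30         # kg
    M_jup = 1.898e27         # kg

    M_star = 1.0 * M_sun     # assumed Sun-like host star
    m_p    = 6.0 * M_jup     # planet mass from the article
    P      = 10 * 86400.0    # assumed 10-day orbit (closer in than Mercury), in seconds
    sin_i  = 1.0             # assume an edge-on orbit
    e      = 0.0             # assume a circular orbit

    k_factor = (2 * math.pi * G / P) ** (1 / 3)
    K = k_factor * m_p * sin_i / ((M_star + m_p) ** (2 / 3) * math.sqrt(1 - e ** 2))

    print(f"Stellar wobble semi-amplitude: about {K:.0f} m/s")
    # Roughly 560 m/s under these assumptions -- enormous by exoplanet standards,
    # yet still comparable to the spurious velocity signals that spots and flares
    # on a very active young star can produce, which is why the two are hard to
    # disentangle.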

There is currently a severe lack of knowledge about the early stages of planet evolution. Most planet-search surveys tend to target much older stars, with ages in excess of a billion years. Only one young planet, with an age of 100 million years, was previously known. However, at only 35 million years, BD+20 1790b is approximately three times younger. Detecting young planets will allow astronomers to test formation scenarios and to investigate the early stages of planetary evolution.

BD+20 1790b was discovered using observations made at different telescopes, including the Observatorio de Calar Alto (Almería, Spain) and the Observatorio del Roque de los Muchachos (La Palma, Spain) over the last five years. The discovery team is an international collaboration including: M.M. Hernán Obispo, E. De Castro and M. Cornide (Universidad Complutense de Madrid, Spain), M.C. Gálvez-Ortiz and J.R. Barnes, (University of Hertfordshire, U.K.), G. Anglada-Escudé (Carnegie Institution of Washington, USA) and S.R. Kane (NASA Exoplanet Institute, Caltech, USA).

From sciencedaily.com

Solar Cells Use Nanoparticles to Capture More Sunlight

Inexpensive thin-film solar cells aren't as efficient as conventional solar cells, but a new coating that incorporates nanoscale metallic particles could help close the gap. Broadband Solar, a startup spun out of Stanford University late last year, is developing coatings that increase the amount of light these solar cells absorb.

Based on computer models and initial experiments, an amorphous silicon cell could jump from converting about 8 percent of the energy in light into electricity to converting around 12 percent. That would make such cells competitive with the leading thin-film solar cells produced today, such as those made by First Solar, headquartered in Tempe, AZ, says Cyrus Wadia, codirector of the Cleantech to Market Program in the Haas School of Business at the University of California, Berkeley. Amorphous silicon has the advantage of being much more abundant than the materials used by First Solar. The coatings could also be applied to other types of thin-film solar cells, including First Solar's, to increase their efficiency.

Solar antenna: The square at the center is an array of test solar cells being used to evaluate a coating that contains metallic nanoantennas tuned to the solar spectrum.


Broadband believes its coatings won't increase the cost of these solar cells because they perform the same function as the transparent conductors used on all thin-film cells and could be deposited using the same equipment.

Broadband's nanoscale metallic particles take incoming light and redirect it along the plane of the solar cell, says Mark Brongersma, professor of materials science and engineering at Stanford and scientific advisor to the company. As a result, each photon takes a longer path through the material, increasing its chances of dislodging an electron before it can reflect back out of the cell. The nanoparticles also increase light absorption by creating strong local electric fields.
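
The payoff of redirecting photons along the plane of the cell can be estimated with the Beer-Lambert law: the fraction of light absorbed grows with the optical path length through the silicon. The sketch below illustrates that with assumed round numbers for the film thickness, absorption coefficient, and in-plane path length; they are not Broadband's measured values.

    # Beer-Lambert estimate of how path length affects absorption in a thin film.
    # Absorbed fraction = 1 - exp(-alpha * L); illustrative numbers only.
    import math

    alpha = 1e4            # assumed absorption coefficient, 1/cm (order of magnitude for a-Si in red light)
    thickness = 300e-7     # 300 nm film, in cm
    lateral_path = 3e-4    # assumed 3 micrometres travelled along the plane after scattering, in cm

    straight_through = 1 - math.exp(-alpha * thickness)
    redirected       = 1 - math.exp(-alpha * lateral_path)

    print(f"Absorbed on a single straight pass: {straight_through:.1%}")
    print(f"Absorbed along the in-plane path:   {redirected:.1%}")
    # Lengthening the photon's path inside the same thin layer raises the odds
    # that it frees an electron before escaping the cell.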

The particles, which are essentially nanoscale antennas, are very similar to radio antennas, says Brongersma. They're much smaller because the wavelengths they interact with are much shorter than radio waves. Just as conventional antennas can convert incoming radio waves into an electrical signal and transmit electrical signals as radio waves, these nanoantennas rely on electrical interactions to receive and transmit light in the optical spectrum.

Their interaction with light is so strong because incoming photons actually couple to the surface of metal nanoparticles in the form of surface waves called plasmons. These so-called plasmonic effects occur in nanostructures made from highly conductive metals such as copper, silver, and gold. Researchers are taking advantage of plasmonic effects to miniaturize optical computers, and to create higher-resolution light microscopes and lithography. Broadband is one of the first companies working to commercialize plasmonic solar cells.

In his lab at Stanford, Brongersma has experimented with different sizes and shapes of metallic nanostructures, using electron-beam lithography to carve them out one at a time. Different sizes and shapes of metal particles interact strongly with different colors of light, and will direct them at varying angles. The ideal solar-cell coating would contain nanoantennas varying in size and shape over just the right range to take advantage of all the wavelengths in the solar spectrum and send them through the cell at wide angles. However, this carving process is too laborious to be commercialized.

Through his work with Broadband, Brongersma is developing a much simpler method for making the tiny antennas over large areas. This involves a technique called "sputter deposition" that's commonly used in industry to make thin metal films (including those that line some potato-chip bags). Sputtering works by bombarding a substrate with ionized metal. Under the right conditions, he says, "due to surface tension, the metal balls up into particles like water droplets on a waxed car." The resulting nanoparticles vary in shape and size, which means they'll interact with different wavelengths of light. "We rely on this randomness" to make the films responsive to the broad spectrum found in sunlight, he says.

Broadband is currently developing sputtering techniques for incorporating metal nanoantennas into transparent conductive oxide films over large areas. Being able to match the large scale of thin-film solar manufacturing will be key to commercializing these coatings.

The company has been using money from angel investors to test its plasmonic coatings on small prototype cells. So far, says Brongersma, enhanced current from the cells matches simulations. Broadband is currently seeking venture funding to scale up its processes, says CEO Anthony Defries.

By Katherine Bourzac

From Technology Review

A Personalized Tumor Tracker

Tracking tiny amounts of a patient's unique cancer DNA could provide a new way of detecting small tumors or stray cancer cells that linger after treatment. Researchers from Johns Hopkins University and Life Technologies Corporation, a biotechnology tools company, used fast and cheap sequencing methods to spot genetic alterations in breast and bowel cancers in individual patients. Once found, the researchers used stretches of rearranged DNA to construct personalized biomarkers that allow them to detect even faint traces of tumor DNA.

Tracking cancer: Victor Velculescu (shown here), a physician and researcher at Johns Hopkins University, has developed a new way to use DNA sequencing to track the subtle signs of cancer that may linger after treatment.


As our understanding of cancer grows, scientists are beginning to view it as a chronic disease that is very difficult to eliminate entirely. Technologies like this one could provide ways to track it and keep it in check. The technique exploits a well-known tendency of tumors to sport scrambled chromosomes. Such swapping around of big chunks of DNA may in fact be one of the key events contributing to cells becoming cancerous in the first place.

Existing biomarkers are mostly protein-based and available for only some types of cancers. An example is the PSA protein that indicates prostate cancer. But because these proteins aren't always unique to cancer cells, they aren't very sensitive. "Our genetic markers work because they are extremely different" from the DNA in healthy cells, says Victor Velculescu, who led the research. "We could easily find one piece of cancer DNA among 400,000 normal ones." The research was published today in the journal Science Translational Medicine.

While scientists have long known that cancer cells tend to harbor scrambled DNA, using this information to track the progression of cancer, or the effectiveness of treatment, has been a challenge. That's because the precise nature of the genetic change is different in each patient, making these markers hard to find. The notable exceptions are several types of blood cancers that always display the same type of DNA rearrangement.

To tackle solid tumors with unpredictable genetic changes, Velculescu's team turned to new sequencing technologies that have brought sequencing costs down tremendously over the past few years. Cheap sequencing meant the scientists could search the entire genome for signs of cancer. They used technology from Applied Biosystems, part of Life Technologies, to sequence four bowel cancer and two breast cancer genomes, along with the genomes of four patients' healthy tissue.

The Applied Biosystems approach works by chopping the genome up into 200 million pieces that are each about 1,500 DNA base pairs long, and then sequencing just the 25 base pairs at the edges of these pieces, yielding pairs of mated tags. By comparing the sequences of these DNA ends against a healthy reference genome, as well as between the patients' normal and tumor genomes, the researchers could spot rearrangements between chunks of DNA. Velculescu's team found about nine regions of swapped DNA in each tumor, providing unique biomarkers for each patient's tumors.
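
Conceptually, a rearrangement shows up when the two ends of a mate pair map somewhere they shouldn't: to different chromosomes, or much farther apart than an intact ~1,500-base-pair fragment allows. The Python sketch below illustrates that filtering step with invented read mappings; it is not the analysis software used in the study.

    # Toy mate-pair filter: flag pairs whose ends map inconsistently with an
    # intact ~1,500 bp fragment. All read mappings below are invented.
    EXPECTED_INSERT = 1500   # approximate fragment length, in base pairs
    TOLERANCE = 500          # allowed deviation before a pair looks suspicious

    # Each mate pair: (chromosome, position) of end 1 and end 2 after alignment
    # to the reference genome.
    mate_pairs = [
        (("chr1", 1_000_200), ("chr1", 1_001_650)),   # normal: same chromosome, ~1.4 kb apart
        (("chr1", 5_400_000), ("chr4", 22_750_000)),  # ends on different chromosomes
        (("chr2", 9_100_000), ("chr2", 9_850_000)),   # same chromosome, but ~750 kb apart
    ]

    def is_rearranged(end1, end2):
        """Return True if a pair is inconsistent with an intact fragment."""
        (chrom1, pos1), (chrom2, pos2) = end1, end2
        if chrom1 != chrom2:
            return True                                # inter-chromosomal join
        return abs(pos1 - pos2) > EXPECTED_INSERT + TOLERANCE

    for end1, end2 in mate_pairs:
        if is_rearranged(end1, end2):
            print("Candidate breakpoint:", end1, "<->", end2)
    # Clusters of such discordant pairs, seen in the tumor genome but not in the
    # patient's normal genome, mark the rearranged regions used as biomarkers.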

The researchers then tracked the level of abnormal DNA as one of the colon cancer patients underwent different types of treatment. After surgery, chemotherapy, and surgical removal of metastases from the liver, the level of cancer-specific DNA in his blood dropped from 37 percent to 0.3 percent. This showed that some cancer cells still remained in the liver, indicating a need to remain vigilant and consider further treatment.

The research is an "exciting step down the road toward personalized cancer medicine," says Peter Johnson at the University of Southampton and Cancer Research UK's chief clinician. "The detection of DNA changes, unique to individual cancers, has proved to be a powerful tool in guiding the treatment of leukemia. If this can be done for other types of cancer like bowel, breast, and prostate, it will help us to bring new treatments to patients better and faster than ever."

Velculescu says the biggest caveat to a wide clinical use of the technique is the cost. While the price of sequencing has dropped dramatically, the analysis costs around $5,000 per genome. In an editorial accompanying the paper, Ludmila Prokunina-Olsson and Stephen Chanock of the National Cancer Institute, in Bethesda, MD, point out that researchers will need to sequence a number of cancer genomes before the approach can be put into clinical practice. For example, scientists need to assess how reliably these DNA rearrangements can be detected, whether certain types of rearrangements are most useful in tracking cancer, and whether certain parts of the genome tend to harbor these changes. In addition, researchers need to show that detecting latent cancer DNA can help tailor treatment, improving a patient's long-term health.

By Nora Schultz

From Technology Review

Rewinding the Clock for Aging Cells

Reverting skin cells from people with a premature aging disease back to a more embryonic state appears to overcome the molecular defect in these cells. People with the disease have abnormally short telomeres, a repetitive stretch of DNA that caps chromosomes and shrinks with every cell division, even in healthy people.

Researchers from Children's Hospital Boston found that reprogramming the skin cells, using induced pluripotent stem cell technology, lengthened the telomeres in the cells. The reprogramming process activated the telomerase enzyme, which is responsible for maintaining telomeres. The research was published today in the online version of the journal Nature.


Aging cells: Reprogramming skin cells from patients with a premature aging disease appears to lengthen telomeres (green), repetitive DNA sequences that cap chromosomes (blue). Telomere length is a measure of cellular "aging" and determines how many times a cell can divide.


The research adds to previous findings suggesting that enhancing activity of the telomerase enzyme might benefit patients with premature aging disorders. The study also provides a new tool for studying telomerase, an enzyme of great interest to scientists working on both aging and cancer. The shortening of telomeres over a lifetime is thought to be tied to aging. And abnormal activation of telomerase in cancer cells allows them to proliferate uncontrollably. While scientists already knew that reprogramming could lengthen telomeres in cells from healthy people, it was unclear if the same could happen in cells with defective telomerase.

Telomerase is most active in stem cells, allowing these cells to maintain their telomere length and divide indefinitely. The telomeres of differentiated cells, such as skin cells, shorten with every cell division, limiting their lifespan. (The scientists who discovered the enzyme in the 1980s were awarded the Nobel Prize in Physiology or Medicine last year.)
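
The arithmetic behind that lifespan limit is simple: each division trims a roughly fixed number of base pairs off the telomere, and the cell stops dividing once the telomere falls below a critical length, unless telomerase adds sequence back. The simulation below uses illustrative numbers (a 10-kilobase starting telomere, about 70 base pairs lost per division), not measurements from the study.

    # Toy model of telomere shortening with and without telomerase activity.
    # All lengths and rates are illustrative, not experimental values.
    START_LENGTH = 10_000     # starting telomere length, base pairs
    LOSS_PER_DIVISION = 70    # bp trimmed from the telomere each division
    CRITICAL_LENGTH = 4_000   # below this, the cell stops dividing (senescence)

    def divisions_until_senescence(telomerase_gain_bp=0):
        """Count divisions before the telomere drops below the critical length."""
        length, divisions = START_LENGTH, 0
        while length > CRITICAL_LENGTH and divisions < 1_000:
            length += telomerase_gain_bp - LOSS_PER_DIVISION
            divisions += 1
        return divisions

    print("No telomerase:      ", divisions_until_senescence(), "divisions")
    print("Partial telomerase: ", divisions_until_senescence(40), "divisions")
    print("Full telomerase:    ", divisions_until_senescence(70), "divisions (loop capped)")
    # When telomerase fully balances the per-division loss, the counter only stops
    # at the loop cap -- the cell can in principle keep dividing, which is why stem
    # cells and reprogrammed iPS cells behave so differently from differentiated cells.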

People with a premature aging disease called dyskeratosis congenita often have genetic defects in one of the three components of telomerase, producing a range of abnormalities, including in the skin, blood, and gastrointestinal tract. The deadliest defect is an inability to replenish the various types of blood cells, leading to early death from infection or bleeding. "We know that cells from these patients grow very poorly in culture compared to normal cells," says Inderjeet Dokal, a physician at Barts and The London School of Medicine and Dentistry, in London, who identified the first genes underlying the disease but was not involved in the new research. The disease, which is quite rare, has become of broader interest thanks to a growing focus on the science of telomeres and their role in aging.

In the new study, Suneet Agarwal, a physician and researcher at Children's Hospital, and collaborators took skin cells from three patients with the disease and genetically engineered the cells to express a set of genes that triggers reprogramming, reverting the cells to an embryonic state. They were surprised to find that the reprogrammed cells grew and divided, their telomeres lengthening with subsequent divisions.

"They show that they can make the cells young," says Lorenz Studer, a physician and scientist at Memorial Sloan-Kettering Cancer Center, in New York, who was not involved in the research. The defect in the telomerase enzyme "seems to be repressed or overridden during reprogramming, which probably explains why patients do reasonably well in the early stages of life," he says. "Patients still have same mutation whether in the [skin cell] or iPS cell, but the mutation only manifests itself in the differentiated cell."

The researchers found that reprogramming appeared to activate a specific component of the telomerase enzyme, a discovery that they hope to use to develop new treatments for this and other telomerase-related diseases. Agarwal hopes to search for drugs that boost the enzyme.

"This disease is an ideal case for the clinical application of telomere-rejuvenated adult stem cells or iPS cell therapies, because the primary defect of telomerase deficiency does not need to be corrected if telomerase function can be temporarily stimulated enough to elongate telomeres," wrote Kathleen Collins, a biologist at the University of California, Berkeley, in an e-mail. "This work shows that the iPS state does exactly that."

The findings are an early example of the potential of induced pluripotent cell reprogramming, a technology first developed in 2007 as a tool for studying human disease. The technique, in which genetic engineering or chemicals are used to activate genes normally expressed in embryonic cells, allows scientists to create stem cells from patients with different diseases. The hope is that differentiating these cells into the cell type affected by the disease will allow researchers to study the molecular mechanisms that cause it.

Studer points out that the new research could shed light on how the age or telomere status of a cell might affect how it manifests a particular disease. For example, it's not yet clear whether cells derived from patients with an age-related disease, such as Parkinson's or Alzheimer's, will show signs of the disease soon after reprogramming, or if the cells must age--cycling through a number of cell divisions--to more accurately reflect age-related ailments.

By Emily Singer

From Technology Review

Vaccines that Can Beat the Heat

Mixing virus-based vaccines with sugars and allowing them to dry on a simple filter can keep them stable for four months, even at tropical temperatures. The process, developed and tested by scientists at Nova Bio-Pharma Technologies and the University of Oxford, could provide an inexpensive way to streamline vaccine storage and delivery, reduce waste, and improve vaccine efficacy. "It is very simple," says Matt Cottingham, a virologist who worked on the project at Oxford.

Sugar shots: A rectangular cartridge contains a filter onto which a vaccine-sugar mixture has been dried at room temperature. The process keeps the vaccines stable even at tropical temperatures, offering a way to simply and less expensively ship them around the world.


The new technique could make vaccines cheaper and more accessible in areas lacking modern infrastructure. Existing live vaccines must be refrigerated at between 4 °C and 8 °C to remain effective. In countries like England and the United States, maintaining this "cold chain" costs up to $200 million a year and increases the cost of vaccination by 14 percent to 20 percent, according to the World Health Organization. In poor countries, refrigerated transport and even electricity at medical clinics are often missing altogether, making vaccination impossible.

Nova has previously shown that the technique can stabilize various types of vaccines, as well as protein-based drugs. The new study, published in today's issue of Science Translational Medicine, is the first time a live-virus vaccine has been kept potent after exposure to high temperatures.

The scientists used the technique on two viruses that are the basis for some of the latest vaccines in development. To make vaccines from these live viruses, the researchers disable the viruses so they can infect a cell in the body but not replicate, and then engineer them to carry genes for proteins from different disease organisms. This way, the viral vaccine will stimulate an immune response but won't make the recipient sick. The team at the Jenner Institute in Oxford, led by professor Adrian Hill, has pioneered the use of these viruses as the basis for vaccines against tuberculosis, malaria, and a "universal" flu vaccine, as well as for HIV. All of these are currently in clinical trials.

The viruses must remain alive in order to be effective, but they are sensitive to heat. Drying them in the sugar solution makes them less vulnerable. "This could be a really big breakthrough," says Stephanie James, director of science and director of the Grand Challenges in Global Health Initiative at the Foundation for the National Institutes of Health, which oversaw funding of the work. "These viral vaccine vectors are being seriously examined for the development of a lot of new vaccines to address disease problems that we don't have vaccines for yet."

The Oxford team showed that it could preserve the two vaccine viruses by mixing the viruses with sucrose--common table sugar--and trehalose, a sugar found in plants and mushrooms and used as a stabilizer in processed foods. The team then dripped the mixture onto a membrane made of glass fibers and dried it at room temperature in a low-humidity chamber. This allowed the sugars to form a noncrystalline solid around the fibers of the membrane, immobilizing the virus so that nothing could interact with it. Cottingham notes that neither using sugars nor drying are new ways to stabilize pharmaceuticals. "The crucial step is that the drying happens on the membrane, so we can remove the water at a relatively low temperature," he says. The exact concentrations of sugars and the type of filter used had to be established by trial and error. "There was a pretty big empirical element in all this," says Cottingham.

To release the vaccine, the researchers flushed the membranes with saline, which dissolves the sugar almost instantaneously. Based on tests done in mice, the team found that they could store the two different vaccines on sugar-stabilized membranes at a tropical 45 °C for as long as six months without any degradation. The vaccines could be kept at body temperature--37 °C--for a year or more with only tiny losses in effectiveness.

Nova Bio-Pharma holds the patent on the drying technique, and it has also developed a small plastic cartridge into which the filter is sealed. The cartridge has a hole at either end, one of which fits a sterile syringe and the other a disposable needle. To administer the vaccine, a nurse or technician slowly pushes sterile saline through the cartridge; the saline instantly rehydrates the vaccine and carries it out through the needle.

Samodh de Costa, the stabilization project manager at Nova, says the company already has an aseptic manufacturing process in place that can produce quantities that would be needed for clinical trials. The next steps are to show that the process can be scaled up to industrial manufacturing levels and demonstrate that it works with a standard or newly licensed human vaccine.

By Erika Jonietz

From Technology Review

Nanopillars Boost Solar Efficiency

Thin-film solar cells are less expensive than traditional photovoltaics sliced from wafers, but they're not as efficient at converting the energy in sunlight into electricity. Now a Newton, MA-based startup is developing a nanostructured design that overcomes one of the main constraints on the performance of thin-film solar cells. Solasta fabricates its cells on arrays of nanopillars rather than on flat surfaces, boosting the efficiency of amorphous silicon solar cells to about 10 percent--still less than crystalline silicon panels, but more than the thin-film amorphous silicon panels on the market today. The company says that the design won't require new equipment or materials and that it will license its technology to amorphous-silicon manufacturers at the end of this year.

Pillar power: This microscope image shows the layers of a solar cell built on a nanopillar substrate. The core of each pillar is coated first with metal, then amorphous silicon, and then a transparent conductive oxide.


Solasta's solar architecture eliminates the tradeoff between thick and thin in thin-film solar cell design by separating the electrons' path from the photons' path. Light tends to reflect from thin-film cells without being absorbed. The thicker a cell's active layer, the more incident light it will collect, and the more free electrons it will generate. But the thicker the active layer, the fewer free electrons will make it out of the cell.
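
The tradeoff can be made concrete with two competing factors: the fraction of light absorbed rises with layer thickness, while the fraction of freed electrons that gets collected falls once the layer is thicker than the distance an electron can travel. The sketch below uses rough, assumed numbers, purely to show why a conventional planar cell has an awkward optimum that Solasta's geometry sidesteps.

    # Toy planar-cell tradeoff: thicker layers absorb more light but collect
    # fewer of the electrons they generate. All parameters are assumed.
    import math

    ABSORPTION_LENGTH = 1000    # nm of material needed for strong absorption (assumed)
    COLLECTION_LENGTH = 300     # nm an electron can travel before being lost (assumed)

    def relative_output(thickness_nm):
        absorbed  = 1 - math.exp(-thickness_nm / ABSORPTION_LENGTH)   # more light captured
        collected = math.exp(-thickness_nm / COLLECTION_LENGTH)       # fewer electrons escape
        return absorbed * collected

    for t in (100, 300, 600, 1200):
        print(f"{t:5d} nm thick -> relative output {relative_output(t):.2f}")
    # In a planar cell the product peaks at an intermediate thickness. Nanopillars
    # break the link: light travels down the long axis of the pillar while electrons
    # only cross the thin coating to reach the metal core.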

The cells designed by Solasta are built on a substrate forested with long, thin, vertically arrayed nanopillars. The pillars are coated first with metal, then with a thin layer of semiconducting material such as amorphous silicon, and then with a layer of transparent conductive oxide. Though the silicon layer is thin, a photon still has a relatively long path to travel down the length of the nanopillars, and a good chance of transferring its energy to an electron. Freed electrons then travel perpendicularly over a very short path to the metal at the core of each pillar, and shimmy down this electrical pole and off the cell. "Electrons never have to travel through the photovoltaic material," says Zhifeng Ren, professor of physics at Boston College. "As soon as they're generated, they go into the metal." Ren founded Solasta with professors Michael Naughton and Krzysztof Kempa.

Other groups are also attempting to increase the efficiencies of thin-film solar cells by creating nanostructures that provide separate paths for electrons and photons. But the advantage of Solasta's nanopillar substrate is that it's compatible with the manufacturing techniques used to make today's thin-film solar cells, which are mostly built on glass using chemical-vapor deposition techniques. "We'll license to existing thin-film manufacturers to get them an efficiency boost without having to switch out their equipment," says Mike Clary, CEO of Solasta. "Other people are working on nanostructured surfaces to improve the performance in one way or another, but there's nothing close to these efficiency levels or this close to commercialization," adds Clary.

So far, Solasta's prototyping has been done on small cells. In the coming months, the company will work on scaling up its cells to conventional thin-film sizes. The company is also testing different substrate materials, including polymer nanowires, to determine which material provides scalability to large areas while supporting the best efficiencies. Naughton, the company's chief technology officer, says the concept will work with any thin-film solar materials, but the company is focusing on amorphous silicon first.

The nanopillar architecture has another advantage in addition to efficiency when applied to amorphous silicon cells. "Amorphous silicon cells degrade in prolonged sunlight, reducing their efficiency by 20 to 30 percent," says Naughton. But this degradation is much less pronounced in cells thinner than about 100 nanometers, such as Solasta's, which should maintain their performance better over their lifetime.

The company will also develop the nanopillar architecture for new types of solar cells that take advantage of quantum phenomena at the nanoscale. The Boston College researchers recently demonstrated that ultrathin solar cells can allow "hot" electrons with very high energy levels to exit the cell. Even in thin cells, however, these electrons tend to lose their energy before they can escape. In the hope that the dual-path architecture of its nanopillars will solve this problem, Solasta will work on developing nanopillar solar cells with ultrathin layers of silicon.

By Katherine Bourzac

From Technology Review

Particle May Be Leading Candidate for Mysterious Dark Matter

A nine-year search at a unique observatory in an old iron mine 2,000 feet underground has yielded two possible detections of weakly interacting massive particles, or WIMPs. But physicists, including two University of Florida researchers, say there is about a one in four chance that the detections were merely background noise -- meaning that a worldwide hunt involving at least two dozen observatories and hundreds of scientists will continue.

Closeup of a CDMS detector, made of crystal germanium.


"With one or two events, it's tough. The numbers are too small," said Tarek Saab, a UF assistant professor and one of dozens of physicists participating in the Cryogenic Dark Matter Search II, or CDMS II, experiment based in the Soudan mine in Northern Minnesota.

A paper about the results is set to appear February 11 in Science Express, the journal Science's Web site for selected papers that appear in advance of the print publication.

Scientists recognized decades ago that the rotational speed of galaxies and the behavior of galaxy clusters could not be explained by the gravity of the visible stars alone. Something else -- something invisible, undetectable yet extremely powerful -- had to exert the force required to cause the galaxies' more-rapid-than-expected rotational speed and similar anomalous observations.

What came to be known as "dark matter" -- dark because it neither reflects nor absorbs light in any form, visible or other -- is now estimated to comprise as much as 23 percent of the universe. But despite abundant evidence for its influence, no one has ever observed dark matter directly.

There are several possibilities for the composition of this mysterious, omnipresent matter. Particle physics theory points toward WIMPs as one of the most likely candidates.

WIMPs are "weakly interacting" because, although their masses are thought to be comparable to the masses of standard atomic nuclei, they have little or no effect on ordinary matter.

Among other things, that makes them extremely difficult to detect.

However, scientists believe WIMPs should occasionally "kick" or bounce off standard atomic nuclei, leaving behind a small amount of energy that should be possible to detect.

The CDMS II observatory is located a half-mile underground, beneath rock that blocks most particles, such as those accompanying cosmic rays. At the observatory's heart are 30 hockey-puck-sized germanium and silicon detectors cryogenically cooled to negative 459.58 degrees Fahrenheit, just shy of absolute zero. In theory, WIMPs would be among the few particles that make it all the way through the earth and rock. They would then occasionally kick the atoms in these detectors, generating a tiny amount of heat, a signal that would be observed and recorded on the experiment's computers.
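
That Fahrenheit figure is easier to interpret once converted to the absolute Kelvin scale, as in the quick check below.

    # Convert the detectors' operating temperature to kelvin.
    def fahrenheit_to_kelvin(temp_f):
        return (temp_f + 459.67) * 5.0 / 9.0

    print(fahrenheit_to_kelvin(-459.58))
    # 0.05 K -- about five hundredths of a degree above absolute zero.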

Durdana Balakishiyeva, a postdoctoral associate in physics at UF, and Saab have participated in the analysis of data produced by the experiment, as well as in simulations of the detectors' response. Since 2007, they have also helped test, at the UF campus in Gainesville, many of the detectors being used in the successor SuperCDMS experiment. The UF tests involved cooling and operating the detectors just as they are operated in Minnesota, to verify that they were up to par.

The 15 institutions participating in CDMS II gathered data from 2003 to 2009. Observers recorded the two possible WIMP events in 2007, one on Aug. 8 and the second on Oct. 27. Scientists had estimated that five detections would be sufficient to confirm WIMPs -- meaning that the two events fall short, according to the CDMS team. But while the two detections may not be conclusive, they do help to set more stringent limits on how strongly WIMPs interact with ordinary matter.
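
The "one in four" figure quoted earlier is a Poisson statement: given an expected number of background events, what is the chance of seeing at least two by accident? The calculation below assumes an expected background of about 0.9 events; that figure is an illustrative assumption, not the collaboration's published estimate.

    # Chance of seeing at least two events from background alone, assuming the
    # background count follows a Poisson distribution. The expected value of 0.9
    # events is an illustrative assumption.
    import math

    expected_background = 0.9
    p_zero_or_one = math.exp(-expected_background) * (1 + expected_background)
    p_two_or_more = 1 - p_zero_or_one

    print(f"P(>=2 background events) = {p_two_or_more:.2f}")
    # About 0.23 -- roughly a one in four chance that both events are noise.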

"Up until now, not only us, but everybody was operating without statistics -- we were blind in that sense," Balakishiyeva said. "But now we can speak of statistics in some way."

At the very least, the finding helps to eliminate some theories about dark matter -- raising the profile of the WIMP and potentially accelerating the race to detect it.

"Many people believe we are extremely close -- not just us, but other experiments," Saab said. "It is expected or certainly hoped that in the next five years or so, someone will see a clear signal."

From sciencedaily.com