Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........

Hadron Smashes Through Door to New Era in Physics

The Large Hadron Collider at CERN, the European Organization for Nuclear Research, finally smashed two beams of protons together at high speed on Tuesday after a couple of failed starts.

CERN was trying to get the two proton beams to collide at an energy level of 7 trillion electron volts (TeV), but problems with the electrical system forced the scientists to reset the system and try again.
The experiment succeeded shortly after 1 p.m. central European summer time.
CERN will run the collider for 18 to 24 months.

Do the Proton Mash
The experiment sent two proton beams, each traveling at more than 99.9 percent of the speed of light, into head-on collision, creating showers of new particles for physicists to study.

Each beam consisted of nearly 3,000 bunches of up to 100 billion particles each. The particles are tiny, so the chances of any two colliding are small: each crossing produces only about 20 collisions among 200 billion particles, CERN said.

However, with the beams streaming continuously into each other, bunches crossed about 30 million times a second, resulting in up to 600 million collisions a second.
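
Taken together, the figures quoted above are roughly self-consistent; a quick back-of-the-envelope check (using the article's approximate numbers, not official CERN parameters) looks like this:

```python
# Back-of-the-envelope check of the collision figures quoted in the article.
# All numbers are the approximate values reported above.

bunches_per_beam = 3000            # "nearly 3,000 bunches"
protons_per_bunch = 100e9          # "up to 100 billion particles each"
collisions_per_crossing = 20       # "~20 collisions among 200 billion particles"
crossings_per_second = 30e6        # "colliding about 30 million times a second"

# Two bunches of ~100 billion protons meet, yet only ~20 protons actually collide.
particles_per_crossing = 2 * protons_per_bunch
print(f"particles per crossing: {particles_per_crossing:.0e}")   # 2e+11

# Total collision rate = collisions per crossing x crossing rate.
collisions_per_second = collisions_per_crossing * crossings_per_second
print(f"collisions per second:  {collisions_per_second:.0e}")    # 6e+08
```

The product of the last two quoted numbers does indeed reproduce the "up to 600 million collisions a second" figure.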

The experiment ran for more than three hours and recorded half a million events, according to a tweet from CERN. Four huge detectors -- Alice, Atlas, CMS and LHCb -- observed the collisions and gathered data.
Physicists worldwide are examining the results. They hope to shed light on the origin of mass, the grand unification of forces and the presence of dark matter in the universe.

Alice and the Kids

Alice will study the quark-gluon plasma, a state of matter that probably existed in the first moments of the universe. Quarks and gluons are the basic building blocks of matter. A quark-gluon plasma, also known as "quark soup," consists of almost free quarks and gluons. It exists at extremely high temperatures or extremely high densities or a combination of both.

CERN first tried to create quark-gluon plasma in the 1980s, and the results led it to announce indirect evidence for a new state of matter in 2000. The Brookhaven National Laboratory's Relativistic Heavy Ion Collider is also working on creating quark-gluon plasma, and scientists there claim to have created the plasma at a temperature of about 4 trillion degrees Celsius. CERN's experiment with the LHC is a continuation of the work on this subject.

Atlas, the largest detector at the LHC, is a multipurpose detector that will help study fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

CMS is the heaviest detector at the LHC. It is a multipurpose detector consisting of several systems built around a powerful superconducting magnet.

LHCb will help scientists find out why the universe is made almost entirely of matter and contains almost no antimatter.

How LHC Works

The Large Hadron Collider is simply a machine for concentrating energy into a relatively small space.
In Tuesday's experiment, two proton beams were smashed together at a total energy of 7 TeV, the highest collision energy ever achieved under laboratory conditions.

Here's how the collider works: After the beams are accelerated to 0.45 TeV in CERN's accelerator chain, they are injected into the LHC ring, where they make millions of circuits. On each circuit, the beams get an energy impulse from an electric field. This goes on until each beam reaches 3.5 TeV, for a combined collision energy of 7 TeV.

For some perspective on that 7 TeV figure, 1 TeV is roughly the kinetic energy of a flying mosquito. Not much -- but then a proton is about a trillion times smaller than a mosquito.
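
To make the mosquito comparison concrete, here is a hedged sketch converting 1 TeV into everyday units; the mosquito's mass and speed below are illustrative assumptions, not figures from CERN:

```python
# Convert 1 TeV to joules and compare with a flying mosquito's kinetic energy.
# The mosquito mass and speed are rough illustrative assumptions.

EV_TO_JOULES = 1.602e-19                 # one electron volt in joules
tev_in_joules = 1e12 * EV_TO_JOULES
print(f"1 TeV = {tev_in_joules:.2e} J")  # ~1.6e-07 J

# A mosquito of ~2.5 mg cruising at ~0.4 m/s (assumed values):
mass_kg = 2.5e-6
speed_m_s = 0.4
kinetic_energy = 0.5 * mass_kg * speed_m_s**2
print(f"mosquito KE ~ {kinetic_energy:.1e} J")  # ~2e-07 J, the same order as 1 TeV
```

Under these assumptions the two energies come out within a factor of about two of each other, which is why the mosquito analogy is so often used.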

At full power, each beam will have as much energy as a car traveling at about 995 miles per hour.
Some people feared the LHC would generate small black holes that might have some impact on the universe. Not to worry, Joseph Lykken, a particle physicist at the Fermi National Accelerator Laboratory in Batavia, Ill., told TechNewsWorld.

"It is indeed possible that LHC collisions could produce microscopic black holes, but these would be rare events, if they happened at all," he explained. "At most, we'd have one tiny black hole at a time, and each of these should be highly unstable."

If black holes could be produced and studied this way, it would revolutionize our understanding of space, time and gravity, Lykken pointed out.

Lykken, who's involved in the CMS experiments at CERN, said that detector was recording about 100 collisions per second, more than expected.

The CMS analysis teams have begun posting the first results from that data, he said.

Keeping the LHC Running

CERN will run the LHC for 18 to 24 months to gather enough data to make significant advances across the science of physics. Once the scientists have rediscovered the known Standard Model particles, they'll begin looking for the elusive Higgs boson.

The Standard Model of particle physics is a theory of three of the four known fundamental interactions and the elementary particles that take part in these interactions. These particles make up all visible matter in the universe. They are quarks, leptons and gauge bosons.

"This is only the beginning of a discovery process that will take months and years, as thousands of physicists analyze the debris from billions of collisions," Lykken said.

"The discovery of the Higgs boson will most likely take several years, but other discoveries could happen much sooner," he added.

"There's a lot of preliminary data that we'll be getting that will be useful in the sense that we can check predictions made about cross sections and multiplicities and charge particle production," said Brandeis University physics professor James Bensinger, who participated in the development of the Atlas Experiment on the LHC.

The search for the Higgs boson will probably have to wait until the next run, Bensinger told TechNewsWorld. The LHC will probably run for another 20 to 30 years, and will have cycles where it will run for two years and be shut off for the third for tweaking and maintenance.

America, Where Are You Now?

The United States had a supercollider project of its own under way, but the project was killed when the House of Representatives canceled funding for it in 1993.

By that time, 14 miles of tunnels had been dug and US$2 billion spent on it.
The laboratory site is close to the town of Waxahachie, Texas, south of Dallas. The site has been sold to an investment group that plans to turn it into a secure data storage center.

It's not likely that this supercollider project will be revived, Bensinger said.
"The LHC could probably do what that supercollider could do anyway," he pointed out.
If major discoveries result from the LHC experiment, however, U.S. scientists may be able to make the case for a new generation of colliders to explore new avenues of research, he suggested.
"The U.S. has a strong research and development program in place that could allow us to build a future supercollider using radically new technologies," Lykken pointed out. "It remains to be seen whether we have the political will to compete with the Europeans, who have devoted $10 billion to the successful LHC project." 

By Richard Adhikari

Robotic Planes Chase After Climate Data

For the first time, NASA has begun flying an unmanned aircraft outfitted with scientific instruments to observe the Earth's atmosphere in greater detail. The agency has partnered with Northrop Grumman to outfit three aircraft, called Global Hawks, which were given to NASA by the U.S. Air Force. Unlike manned aircraft equipped with Earth observation tools, the Global Hawks can fly for up to 30 hours, travel longer distances at higher altitudes, gather more precise data than satellites, and be stationed to monitor an area for extended periods of time.

Robo-plane: NASA and Northrop Grumman have developed this unmanned aircraft equipped with scientific instruments for earth science missions. Called Global Hawk, the airplane was acquired by NASA from the U.S. Air Force and modified to carry instruments that monitor the atmosphere more precisely than satellites can.

"There are certain types of atmospheric and earth science data that we are missing, even though we have things like satellites, manned aircraft, and surface-based networks," says Robbie Hood, director of the National Oceanic and Atmospheric Administration's (NOAA) Unmanned Aircraft Systems program. NOAA has formed an agreement with NASA to help construct the scientific instruments and guide the science missions for the Global Hawks. Hood will evaluate the aircraft to determine how they could best be used. For example, she says, they could fly over a hurricane to monitor changes in its intensity or fly over the Arctic to monitor sea ice changes in greater detail.

The Global Hawks' first mission launched last week--an aircraft flew from NASA's Dryden Flight Research Center at Edwards Air Force Base in California over the Pacific Ocean. The project scientists will launch approximately one flight a week until the end of April. The drone is outfitted with 11 different instruments to take measurements and map aerosols and gases in the atmosphere, profile clouds, and gather meteorological data such as temperatures, winds, and pressures. It also has high-definition cameras to image the ocean colors. 

"The first mission is mostly a demonstration mission to prove the capabilities of the system," says Paul Newman, co-project scientist and an atmospheric physicist at NASA Goddard Space Flight Center in Greenbelt, MD. The aircraft will also fly under the Aura Satellite, a NASA satellite currently studying the Earth's ozone, air quality, and climate, to validate its measurements, making a comparison between its readings and what the new aircraft can do. "Satellites give you global coverage every day, but they can't see a region very precisely. The aircraft can give you regular observations and very fine resolution," says Newman.
The robotic airplanes operate completely autonomously--scientists program the plane prior to departure with the intended destinations, and the plane navigates itself. However, scientists can change the aircraft's flight path once it is en route or remotely pilot it in an emergency. Because a Global Hawk flight can last 30 hours (compared to 12 hours for a manned flight), the aircraft can travel to regions, such as the Arctic, that are typically too dangerous for manned missions.

Autonomous flier: The airplane’s first mission is to monitor the atmosphere above the Pacific Ocean. It can fly for up to 30 hours, reach an altitude of 19.8 kilometers, and travel a range of 22,800 kilometers.

NASA acquired the aircraft from the U.S. Air Force in 2007. They were originally developed for surveillance and reconnaissance missions. Now researchers are modifying them for their first extensive earth science missions. "We can get high resolution in situ measurements, and that is really the gold standard, and something that we have never before been able to do," says Randy Albertson, director of NASA's Airborne Science Program in the earth science division at Dryden. 

The instruments onboard for the first mission include: a LIDAR instrument that uses a laser pulse to measure the shape, size, and density of clouds and aerosols; a spectrograph that measures and maps pollutants like nitrogen dioxide, ozone, and aerosols; an ultraviolet photometer for ozone measurements; a gas chromatograph to measure greenhouse gases; a handful of other instruments that can accurately measure atmospheric water vapor and ozone-depleting chlorofluorocarbons; and high-definition cameras to image ocean color and learn about its biological processes.

The researchers will also be able to sample parts of the atmosphere that they have not been able to reach or monitor for long durations--the upper troposphere and lower stratosphere. The aircraft can fly at an altitude of 19.8 kilometers and travel nearly 22,800 kilometers. That part of the atmosphere is "a crucial region that responds to and contributes to climate change at the surface, and we have come to realize that it is highly undersampled," says David Fahey, co-project scientist and a research physicist at NOAA's Earth Science Research Lab in Boulder, CO. "If you don't know what is going on in certain regions of the atmosphere, you will misinterpret what is going on at the surface." 

NASA and Northrop Grumman modified the aircraft to be a plug-and-play system, so that instruments can be easily taken off and new ones installed, depending on the mission. The plane can also be redesigned for a specific mission, if necessary. 

"The planes are really robotic satellite-aircraft hybrids that are going to revolutionize the way we do science," Newman says. The next mission will be to study hurricanes in the Caribbean, and will include a new suite of instruments for the planes. 

By Brittany Sauser 
From Technology Review

From Biomass to Chemicals in One Step

An early-stage company spun out of the University of Massachusetts, Amherst, plans to commercialize a catalytic process for converting cellulosic biomass into five of the chemicals found in gasoline. These chemicals are also used to make industrial polymers and solvents. Anellotech, which is seeking venture funding, plans to build a pilot plant next year.

Sawdust to gasoline: A process called catalytic pyrolysis converts biomass, such as sawdust, into valuable chemicals. From left to right: sawdust; sludge-like chemicals produced without the catalyst; the powder catalyst; the mixture of aromatic molecules made with the catalyst.

Anellotech's reactors perform a process called "catalytic pyrolysis," which converts three of the structural molecules found in plants--two forms of cellulose and the woody molecule lignin--into fuels. Ground-up biomass is fed into a high-temperature reactor and blended with a catalyst. The heat causes the cellulose, lignin, and other molecules in the biomass to chemically decompose through a process called pyrolysis; a catalyst helps control the chemical reactions, turning cellulose and lignin into a mix of carbon-ring-based molecules: benzene, toluene, and xylenes. 

The global market for this group of chemicals is $80 billion a year and growing at a rate of 4 percent a year, says Anellotech CEO David Sudolsky. "We're targeting to compete with oil priced at $60 a barrel, assuming no tax credits or subsidies," he says. The company's founder, George Huber, says his catalytic pyrolysis process can create 50 gallons of the chemicals per metric ton of wood or other biomass, with a yield of 40 percent. The other products of the reaction include coke, used to fuel the reactor.

"The advantage of pyrolysis is that it uses whole biomass," says John Regalbuto, an advisor to the Catalysis and Biocatalysis Program at the National Science Foundation. On average, lignin accounts for 40 percent of the energy stored in whole biomass. But because it can't be converted into sugars the way cellulose can, lignin can't be used as a feedstock for fermentation processes such as those used by some biofuels companies to convert sugarcane into fuels.

Pyrolysis is also different from gasification, another process for using whole biomass. Gasification produces a mixture of carbon monoxide and hydrogen called syngas, which can then be used to make fuel. Pyrolysis, by contrast, turns biomass into liquid fuels in a single step. And while gasification can only be done economically at a very large scale, says Regalbuto, catalytic pyrolysis could be done at smaller refineries distributed near the supply of biomass.

Pyrolysis is an efficient way to use biomass, but it's difficult to control the products of the reaction, and it's difficult to get high yields. The keys to Anellotech's process, says Huber, are a specially tailored catalyst and a reactor that allows good control over reaction conditions. Huber's group at UMass, where he is a professor of chemical engineering, was the first to develop a catalytic process for converting biomass directly into gasoline, and Anellotech's processes are based on this work.

So far, Huber has developed two generations of a reactor in the lab. In tests, the group starts with sawdust waste from a local mill. The ground-up biomass is fed into a fluidized bed reactor. Inside, a powdered solid catalyst swirls around in a mixture of gas heated to about 600 °C. When wood enters the chamber, it rapidly breaks down, or pyrolyzes, into small unstable hydrocarbon molecules that diffuse into the pores of the catalyst particles. Inside the catalyst, the molecules are reformed to create a mixture of aromatic chemicals. The reaction process takes just under two minutes.

The company would not disclose details about the catalyst, but Huber says one of its most important properties is the size of its pores. "If the pores are too big, they get clogged with coke, and if they're too small, the reactants can't fit in," says Huber. The company's catalyst is a porous aluminosilicate structure based on ZSM-5, a zeolite catalyst developed by Mobil Oil in 1975 and widely used in the petroleum refining industry. Sudolsky says that it can be made cheaply by contractors. Anellotech's reactors are very similar to those used to refine petroleum, but they are designed for rapid heat transfer and fluid dynamics that ensure the reactants enter the catalyst before they turn into coke.

Stefan Czernik, a senior scientist at the National Renewable Energy Laboratory's National Bioenergy Center in Golden, CO, cautions that the process has so far only been demonstrated on a small scale, and the complexity of these reactors could mean a long road ahead for scaling them up. "It is not easy to replicate at a large scale the relationship between the chemical reaction and heat transfer as it's done in the laboratory," he says.

After demonstrating the process at a pilot plant next year, Anellotech hopes to partner with a chemical company to build a commercial scale facility in 2014. Sudolsky says the company will either license the catalytic pyrolysis process to other companies or build plants distributed near biomass sources, since transporting biomass is not economically viable. 

By Katherine Bourzac 
From Technology Review

Brain Maps for Stroke Treatment

After a stroke, the brain suffers more broadly than just at the spot that was starved of blood. New research, which uses brain imaging to examine connections between different parts of the brain, shows that communication between the left and right hemispheres is often disrupted; the greater the disruption, the more profound the patient's impairment in movement or vision. Researchers hope to use the approach to predict which patients are most likely to recover on their own and which will need the most intensive therapy.
The study is part of a broader effort to incorporate the brain mapping technology into post-stroke assessment, including new clinical trials testing experimental drugs and physical therapy in combination with imaging. Mapping brain connectivity and recovery may give scientists a better measure of which treatments most effectively enhance the brain's innate plasticity--its ability to rewire--and when the brain is best primed for repair. 

 Brain maps: Alex Carter (left) and Maurizio Corbetta have shown that a scanning approach that was originally developed to study brain organization can yield useful insights for clinical treatment of brain injury.

"The kind of information we're getting from neural imaging studies is giving us a better understanding of the kind of changes that are important during recovery," says Alexandre Carter, a neurologist at Washington University, in St. Louis, who led the study.

Stroke patients typically undergo an MRI to identify the precise location of their stroke. But these brain scans don't show how the damaged part of the brain fits into the larger network--the neural connections that feed into and out of this spot. Just as a delay at one station of a subway system can affect service at numerous stops and subway lines, dysfunction in a localized part of the brain disrupts activity in several different parts. 
In the new study, researchers assessed this disruption by creating a functional connectivity map of the brain in people who had recently suffered a stroke. They asked patients to lie quietly in an MRI machine and used functional MRI, an indirect measure of neural activity, to detect spontaneous fluctuations in brain activity. Brain areas that are well-connected will fluctuate in synchrony, providing an indirect way of mapping the brain's networks.
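
The "fluctuate in synchrony" idea can be illustrated with a toy calculation: functional connectivity between two regions is commonly scored as the Pearson correlation of their activity time series. The sketch below is a simplified illustration with synthetic signals, not the authors' actual analysis pipeline:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length time series --
    a common (simplified) proxy for functional connectivity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy BOLD-like signals for a left- and a right-hemisphere region:
# one slow 0.05 Hz oscillation sampled every 0.1 s for one full period.
t = [i * 0.1 for i in range(200)]
left = [math.sin(2 * math.pi * 0.05 * ti) for ti in t]

intact = [0.9 * v for v in left]                                    # in synchrony
disrupted = [math.sin(2 * math.pi * 0.05 * ti + 2.0) for ti in t]   # phase-shifted

print(round(pearson(left, intact), 2))      # 1.0  -> strong connectivity
print(round(pearson(left, disrupted), 2))   # -0.42 -> weak/disrupted connectivity
```

A drop in such a correlation between homologous left and right regions is, in essence, the "disruption between the two hemispheres" the study quantifies.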

As is often the case with stroke, they found that patients' visual or motor problems were limited to just one side of the body, such as a weak left hand or an inability to pay attention to objects in the left side of the field of vision. (Because the left side of the brain typically controls the right side of the body and vice versa, a stroke on one side of the brain will affect the opposite side of the body.) But the researchers found that patients with these symptoms had disruptions in the connections between the two hemispheres. And the level of disruption between the two halves of the brain correlated to the severity of their impairment. "The physical damage has repercussions all throughout the network, like a ripple effect, even in areas that aren't physically damaged," says Carter.

The research, published this month in the Annals of Neurology, is the first step in a multiyear project assessing how to predict how well people will recover from stroke. Researchers will repeat the brain scanning and behavioral testing months after the patients' strokes to see how both change over time.

Carter and others ultimately aim to use the technology to better target stroke treatments. "It's important to know what lies behind recovery, because we want to have a brain-based understanding of new treatments," says James Rowe, a neuroscientist at Cambridge University, in the U.K., who was not involved in the study. In addition, he says, because this kind of scan can be done very early, "we might be able to classify patients who would benefit from one type of therapy or another." 

Two patients who have similar motor impairments might actually have very different disruptions to their brain networks and therefore benefit from different types of treatment. For example, not everyone responds to constraint-induced movement therapy, in which the strong arm is bound, forcing the patient to use their weak arm. Analysis of network dysfunction might help predict which patients will benefit from this treatment.

The research is part of a broader effort to capitalize on the inherent neural plasticity that is present even in the adult brain. "There is more and more interest in changes in the brain that occur at more chronic stages of stroke," says Rick Dijkhuizen, a neurobiologist at University Medical Center Utrecht, in the Netherlands, who was not involved in the current work. "Increasing evidence suggests that the brain is able to reorganize even in patients [whose strokes occurred a long time ago], and this gives us opportunities to look at stroke therapies to promote this reorganization."

By Emily Singer 
From Technology Review

A Bendable Heart Sensor

A new flexible and biocompatible electronic device can produce a more detailed picture of the electrical activity of a beating heart. This high-resolution electrical map could help improve the diagnosis and treatment of heart abnormalities by pinpointing areas of damage or misfiring circuitry.

Beat box: This implantable medical device bends and twists, thanks to transistors made of ultrathin ribbons of silicon. The electrode array shown here has 288 electrodes that can maintain contact with a heart even as it beats.

Today, the best way to map the electrical activity of a person's heart is to insert a probe tipped with a few electrical sensors through a vein and into the heart. The probe is used to measure activity at different locations in the tissue to slowly build up a picture of electrical activity. An electrocardiogram, which picks up signals from outside the body, offers a less precise picture. 

"It can take hours to map where these heart rhythms are coming from," says Brian Litt, a neurologist and biomedical engineer at the University of Pennsylvania, and one of the senior researchers on the project. "If you map at a very high resolution, it may be possible that you can pick up, in local areas, precursors to arrhythmias before they occur." 

The flexible device can be used to attach multiple sensors to the wall of a beating heart, measuring electrical activity at multiple sites despite the heart's movement. The electronics needed to record this activity are also built into the device, meaning more data can be collected. The new device is 25 microns thick and covers 1.5 square centimeters. It contains over 2,000 transistors sealed within a flexible coating and is covered with 288 sensor electrodes. So far the device has been tested successfully in pigs.
It is the first time that flexible electronics have been used in such high density in a medical device, says John Rogers, an engineer at the University of Illinois, Urbana-Champaign, who collaborated on the work. "These devices contain more transistors than any previously reported flexible device," he says.

The flexible device could be used in other kinds of biological sensors, says Litt, including devices for monitoring neurological conditions such as Parkinson's and epilepsy. The work, which also involved researchers from Northwestern University, is published in the journal Science Translational Medicine.
The key to making the device is what Rogers calls "nanomembrane transistors." These components are made out of thin ribbons of silicon, about 100 nanometers thick; on this scale the material loses its characteristic rigidity and becomes flexible. "It's much like a piece of two-by-four," Rogers says. "Wood is not very bendable, but a sheet of paper is." 

Making these transistors required a completely new fabrication technique. Rogers etched out the ribbon circuits from larger blocks of silicon and then used chemical etching to remove silicon from underneath. The circuits could then be peeled away when brought into contact with a stamp-like device. 

Once the circuit has been deposited on a substrate, it is encased in a photocurable water-tight epoxy material. This material was difficult to develop, since it had to have the same mechanical properties as the circuit in order to bend with it, but also needed to be resilient enough to prevent any seepage, even at points where the electrodes protrude. "It probably took us half a year to develop a recipe for that," says Rogers.
The next step, says Litt, is to build a power supply into the device so that it can be used for chronic implantation, and to find a way to transmit data from it wirelessly. The researchers are also developing a version that can also be used to ablate damaged heart tissue through localized heating.

"It is a very impressive advance for electrical mapping of the heart," says Eric Topol, a cardiologist and director of the Scripps Translational Science Institute, in La Jolla, CA. Today the average ablation procedure for atrial fibrillation takes about three hours at best. "This jump in mapping capability could markedly reduce and simplify these procedures and many other interventions," he says. 

By Duncan Graham-Rowe

From Technology Review

Individual Light Atoms, Such as Carbon and Oxygen, Identified With New Microscope

The ORNL images were obtained with a Z-contrast scanning transmission electron microscope (STEM). Individual atoms of carbon, boron, nitrogen and oxygen--all of which have low atomic numbers--were resolved on a single-layer boron nitride sample.
Individual boron and nitrogen atoms are clearly distinguished by their intensity in this Z-contrast scanning transmission electron microscope image from Oak Ridge National Laboratory. Each hexagonal ring of the boron-nitrogen structure, such as the one marked by the green circle in image (a), consists of three brighter nitrogen atoms and three darker boron atoms. The lower image (b) is corrected for distortion.

"This research marks the first instance in which every atom in a significant part of a non-periodic material has been imaged and chemically identified," said Materials Science and Technology Division researcher Stephen Pennycook. "It represents another accomplishment of the combined technologies of Z-contrast STEM and aberration correction."

Pennycook and ORNL colleague Matthew Chisholm were joined by a team that includes Sokrates Pantelides, Mark Oxley and Timothy Pennycook of Vanderbilt University and ORNL; Valeria Nicolosi of the United Kingdom's Oxford University; and Ondrej Krivanek, George Corbin, Niklas Dellby, Matt Murfitt, Chris Own and Zoltan Szilagyi of Nion Company, which designed and built the microscope. The team's Z-contrast STEM analysis is described in an article published March 25 in the journal Nature.
The new high-resolution imaging technique enables materials researchers to analyze, atom by atom, the molecular structure of experimental materials and discern structural defects in those materials. Defects introduced into a material--for example, the placement of an impurity atom or molecule in the material's structure--are often responsible for the material's properties.

The group analyzed a monolayer hexagonal boron nitride sample prepared at Oxford University and was able to find and identify three types of atomic substitutions--carbon atoms substituting for boron, carbon substituting for nitrogen and oxygen substituting for nitrogen. Boron, carbon, nitrogen and oxygen have atomic numbers--or Z values--of five, six, seven and eight, respectively.
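
In Z-contrast (annular dark-field) imaging, scattered intensity rises steeply with atomic number, which is why adjacent light elements can be told apart by brightness alone. A common approximation is intensity proportional to Z to the power of roughly 1.7; that exponent is an assumption here (the true value depends on detector geometry), but it illustrates the contrast between these four species:

```python
# Relative annular-dark-field image intensity for the four light atoms,
# assuming the common approximation I ~ Z**1.7. The 1.7 exponent is an
# assumption; the exact value depends on microscope and detector geometry.

Z = {"boron": 5, "carbon": 6, "nitrogen": 7, "oxygen": 8}

boron_ref = Z["boron"] ** 1.7
for name, z in sorted(Z.items(), key=lambda kv: kv[1]):
    print(f"{name:8s} Z={z}  relative intensity = {z**1.7 / boron_ref:.2f}")
```

Under this scaling, nitrogen comes out roughly 80 percent brighter than boron and oxygen more than twice as bright, consistent with the "brighter nitrogen, darker boron" contrast described in the image caption.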
The annular dark field analysis experiments were performed on a 100-kilovolt Nion UltraSTEM microscope optimized for low-voltage operation at 60 kilovolts.

Aberration correction, in which distortions and artifacts caused by lens imperfections and environmental effects are computationally filtered and corrected, was conceived decades ago but only relatively recently made possible by advances in computing. Aided by the technology, ORNL's Electron Microscopy group set a resolution record in 2004 with the laboratory's 300-kilovolt STEM.
The recent advance comes at a much lower voltage, for a reason.

"Operating at 60 kilovolts allows us to avoid atom-displacement damage to the sample, which is encountered with low Z-value atoms above about 80 kilovolts," Pennycook said. "You could not perform this experiment with a 300-kilovolt STEM."

Armed with the high-resolution images, materials, chemical and nanoscience researchers and theorists can design more accurate computational simulations to predict the behavior of advanced materials, which are key to meeting research challenges that include energy storage and energy efficient technologies.
The research was funded by the DOE Office of Science.

Google and Censorship

All eyes have been on Google's battle with the Chinese government since the company announced on Monday that it would no longer maintain its censored Chinese-language search site. Instead, the company began redirecting users to its Hong Kong-based search service, where it maintains unaltered Chinese-language search results.

However, China isn't the only front in Google's battle to protect its vision of an open Internet. When Google announced in January that it might cease operating in China, David Drummond, senior vice president of corporate development and the company's chief legal officer, wrote that "this information goes to the heart of a much bigger global debate about freedom of speech."

"These issues are coming up all over the world," says Cynthia Wong, Plesser Fellow and staff attorney at the Center for Democracy and Technology, a nonprofit organization based in Washington, D.C., that promotes an open Internet. 

Wong says governments around the world are making policy decisions about material on the Internet--particularly when it comes to questions of child protection, copyright, and cyber attacks. She says it's tempting for them to enlist "technology intermediaries"--companies such as Google that host content or help users find information--to police what users can access. Because Google is so dominant in search and involved with many other Internet services, it often winds up at the center of these controversies, she notes.
Technology has shifted censorship from something that governments do to something that often requires participation from companies, says Ross Anderson, chair of the U.K.-based Foundation for Information Policy Research and a professor of security engineering at the University of Cambridge. 

Google faces pressure to censor content in many different countries. Inside Thailand it censors YouTube videos that mock the country's monarch. In Turkey it deletes videos that portray the country's founder, Mustafa Kemal Atatürk, as a homosexual. In France and Germany, Google abides by strict hate speech laws and censors content produced by extremist groups. And in India, it censors pornography and anything the government deems politically dangerous.

Google is involved in a number of squabbles over censorship. The company recently criticized the Australian government for a plan to introduce mandatory filtering for Internet service providers. Google says that the proposal goes too far. In addition to blocking child sexual abuse material (which Google already filters out of its search results worldwide), the company believes that the proposed Australian plan would block "socially and politically controversial material," such as information on safer drug use or euthanasia.
"This type of content may be unpleasant and unpalatable, but we believe that government should not have the right to block information which can inform debate of controversial issues," wrote Iarla Flynn, head of policy for Google Australia.

Elsewhere, governments are putting pressure on Google to police the content that users upload to its sites. In late February, three Google executives--Drummond, Peter Fleischer, global privacy counsel, and George Reyes, former CFO--were convicted of criminal charges in Italy for failure to comply with the Italian privacy code. The charges were brought in response to a video uploaded to YouTube. Google notes that these executives "did not appear in [the video], film it, upload it, or review it," and that the video was removed from the site "hours" after the Italian police notified the company. 
"In essence, this ruling means that employees of hosting platforms like Google Video are criminally responsible for content that users upload," wrote Matt Sucherman, Google's vice president and deputy general counsel for Europe, the Middle East, and Africa. 

Wong says that Google's battles in China, Italy, and Australia all ultimately threaten the company's ability to publish user-generated content, since liability for what users upload and access would mean needing to police it, which would be financially and legally difficult.

But Wong points to a key difference in China. Under U.S. and European laws, there are strong protections for companies that host or index content, she says. Because of this, in Italy, for example, Google can challenge the ruling through the legal system. In China, the situation is very different--every intermediary down the line can be held responsible for content, regardless of where it came from. That legal situation promotes self-censorship, she says.

Google is well-known for explaining its actions by an altruistic-sounding refrain: "What's good for the Internet is good for Google." But Evgeny Morozov, a Yahoo! fellow at Georgetown University's E.A. Walsh School of Foreign Service, notes that the anticensorship position the company has promoted is also directly related to the bottom line. 

If Google gets forced, by any country, into the position of having to police and restrict what content users can access, Morozov says, this draws the company into a morass of expense and liability. He believes Google has "a strong commercial interest" in maintaining a role as a simple intermediary, which lets it focus on developing its search and other money-making technologies.

Morozov says this also explains why Google is framing many of these free speech issues in terms of international trade, especially since the company faces questions of censorship all over the world. "The governments are finally catching up with the Internet and they want to regulate it," Morozov adds. The question for Google is how well it can protect its stance on Internet freedom, wherever that battle is being fought.

By Erica Naone
From Technology Review

Light Switches for Neurons

Just five years ago, scientists at Stanford University discovered that neurons injected with a photo-sensitive gene from algae could be turned on or off with the flip of a light switch. This discovery has since turned hundreds of labs onto the young field of optogenetics. Today researchers around the world are using these genetic light switches to control specific neurons in live animals, observing their roles in a growing array of brain functions and diseases, including memory, addiction, depression, Parkinson's disease, and spinal cord injury.

Lighting the way: New techniques in optogenetics make it possible to control individual neurons and whole brain circuits with pulses of light. Here, fiber optic wires deliver blue light to the reward centers of the mouse brain, conditioning mice to prefer one chamber over another.

Now Karl Deisseroth, one of the pioneers of the technique, has added some new tools to the optogenetic repertoire that may advance the study of such diseases, at light speed. A molecular technique that controls whole circuits of neurons rather than a single cell will allow scientists to study the role of specific neural networks in the brain. A new near-infrared technique that reaches brain cells in deep tissue could allow scientists to use these techniques noninvasively--currently, they must implant a fiber-optic cable into an animal's brain to deliver light activation to such cells. And an improved "off switch" that makes target neurons more sensitive to light allows for tighter neural control. The group published its results in the April 2 edition of the journal Cell. 

To date, scientists have mainly concentrated on two light switches, or opsins, to activate or inhibit neurons. The first, channelrhodopsin, is a protein found in the cell membranes of green algae. When exposed to blue light, these proteins open membrane channels, letting in sodium and calcium ions. When genetically engineered into mammalian neurons, these proteins cause similar ion influxes, activating neurons. The second light switch, an ion pump called halorhodopsin, lets in chloride ions in response to yellow light, silencing the neuron.
The halorhodopsin light switch has some drawbacks, however. It doesn't silence neurons all that effectively, and it can build up and have toxic effects in brain cells. Deisseroth's team has developed a more effective off switch by taking advantage of a phenomenon called "membrane trafficking." Instead of keeping halorhodopsin inside the cell, Deisseroth essentially engineered molecular instructions to guide the opsins through the cell to the outside membrane, where they can more readily respond to light and open ion channels to inhibit neurons.
"Proteins get shipped around a cell and trafficked from spot to spot with incredible complexity," says Deisseroth. "We had to provide the equivalent of zip codes, bits of DNA on the opsins, to traffic them correctly to the surface of the membrane." (Ed Boyden, a neuroscientist at MIT and another optogenetics pioneer, has also developed more effective off switches, using proteins from fungi and bacteria.)
The team found that this new off switch is 20 times more responsive to yellow light than previous generations. The researchers also found that, while yellow seems to be the sweet spot along the light spectrum for triggering the off switch, red and near-infrared light can also have an effect. To Deisseroth, these results suggest a tantalizing prospect: it's well-known that the closer light gets to infrared, the deeper it can pass through tissue. Engineering a light switch that turns neurons on in response to infrared could open the doors to precise control of circuits deep in the brain, potentially enabling noninvasive treatments for diseases like Parkinson's and depression.

Jerry Silver, professor of neuroscience at Case Western Reserve University, is particularly excited about this new off switch. Silver is using optogenetics to explore bladder control in spinal cord injury, and has been using light to turn off nerves that relax the bladder. These nerves are located in the lower spine, an especially vulnerable area, and the current generation of halorhodopsins requires a high intensity of light to produce a measurable effect.
"We were worried that we'd need a lot of light, which creates a lot of heat," says Silver. "With these new tools that are more sensitive, we might not need as much light, which generates less heat, and the light can invade the tissue much farther, and that's why I'm so excited about this new generation."

In addition to engineering a powerful off switch, Deisseroth's team surmounted another major obstacle in optogenetics--activating a whole neural circuit. While scientists can genetically target light switches to specific types of neurons, it's more difficult to identify and genetically manipulate the cells downstream or upstream of those neurons. Being able to control whole circuits of neurons at a time, at light speed, could give scientists a better understanding of the neural connections involved in behavioral tasks like learning and memory, and diseases like depression and obsessive compulsive disorder.

To activate a neural circuit, Deisseroth first injected a genetic light switch into a motor neuron of a mouse. He manipulated the switch to only work in the presence of another molecule, CRE. Deisseroth injected CRE, along with a trafficking molecule, into another region of the mouse brain. The trafficking molecule "bodily drags CRE from cell to cell," tracing a route back to the target neuron, he says. The CRE unlocks the light switch, and in the presence of blue light, the neuron--and the entire circuit--is activated.
"We think that overcomes, or takes a step toward overcoming, the major remaining limitations of optogenetics," says Deisseroth. "The big challenge is to bring these tools to bear on disease models, and I see patients with autism and depression, and we'll look [to use these tools], and try to come to a circuit understanding of those diseases."

By Jennifer Chu
From Technology Review

Real-World Virtual Reality

Virtual reality (VR) products are now commonly used by many car and aeronautics companies, often to test product designs and simulate user interaction. VR head-mounted displays--inside which a user sees a 3-D environment--are also used in flight simulators. And custom-made VR devices are used for medical training and to help with patient recovery. The latest VR products were on display this week at the IEEE's Virtual Reality 2010 conference in Waltham, MA.

Sensics, based in Columbia, MD, demonstrated a head-mounted display that provides a wider field of view for the wearer. Sensics's latest product, XSight, uses six displays to show a large, compound image to each eye. Larry Brown, president and founder of the company, says XSight, which costs $45,000 to $50,000, is mainly used by academics and engineers, and in military training.

Motion Analysis showed off its motion-tracking technology, which was used to help develop the motion capture behind the 3-D movie Avatar. A motion-capture suit is covered with markers that emit light to special near-infrared cameras arranged around the wearer. The cameras create a computerized skeletal model that the system can then map to a virtual avatar. The user can also wear a 3-D head-mounted display to see her own virtual hands move in real-time in front of her in the virtual world, to an accuracy of around a millimeter. John Francis, VP of industrial and military sales at Motion Analysis, says the system lets an engineer explore a virtual model of a fighter aircraft. He adds that the technology is used in a number of military projects, as well as in medical, industrial, and animation settings. Last month, the company introduced a version of the technology for outdoor use.

Another company, CyberGlove, demoed a glove that captures and translates a user's finger movements with high precision. Users can interact with virtual objects in real-time in a natural way, says Nemer Velasquez, director of sales and marketing for the company. The glove is currently being used for training, defense, and engineering.


By Kristina Grifantini 
From Technology Review

Gene Holds Key to Embryonic Stem Cell Rejuvenation

WEDNESDAY, March 24 (HealthDay News) -- Scientists have identified a gene in mice that is a key player in what could essentially be called embryonic stem cells' "immortality." 

The finding could have a major impact on research into aging, regenerative medicine and the biology of stem cells and cancer, according to the report published online March 24 in the journal Nature.
Embryonic stem cells can develop into nearly any type of cell in the body and can produce infinite generations of new, fully operational embryonic stem cells (daughter cells). But the mechanism for this rejuvenation has been a mystery, the researchers noted in a news release from the U.S. National Institute on Aging (NIA).
In the new study, researchers at the NIA found that embryonic stem cells in mice express a unique Zscan4 gene that enables them to continuously produce vigorous daughter cells.

According to the study authors, this gene isn't turned on every time an embryonic stem cell replicates; only about 5 percent of embryonic stem cells will have the gene activated at any one point.


Nanotech robots deliver gene therapy through blood

CHICAGO (Reuters) – U.S. researchers have developed tiny nanoparticle robots that can travel through a patient's blood and into tumors where they deliver a therapy that turns off an important cancer gene.
The finding, reported in the journal Nature on Sunday, offers early proof that a new treatment approach called RNA interference or RNAi might work in people.
RNA stands for ribonucleic acid -- a chemical messenger that is emerging as a key player in the disease process.

Dozens of biotechnology and pharmaceutical companies including Alnylam, Merck, Pfizer, Novartis and Roche are looking for ways to manipulate RNA to block genes that make disease-causing proteins involved in cancer, blindness or AIDS.

But getting the treatment to the right target in the body has presented a challenge.
A team at the California Institute of Technology in Pasadena used nanotechnology -- the science of really small objects -- to create tiny polymer robots covered with a protein called transferrin that seek out a receptor or molecular doorway on many different types of tumors.
"This is the first study to be able to go in there and show it's doing its mechanism of action," said Mark Davis, a professor of chemical engineering, who led the study.

"We're excited about it because there is a lot of skepticism whenever any new technology comes in," said Davis, a consultant to privately held Calando Pharmaceuticals Inc, which is developing the therapy.
Other teams are using fats or lipids to deliver the therapy to the treatment target. Pfizer last week announced a deal with Canadian biotech Tekmira Pharmaceuticals Corp for this type of delivery vehicle for its RNAi drugs, joining Roche and Alnylam.

In the approach used by Davis and colleagues, once the particles find the cancer cell and get inside, they break down, releasing small interfering RNAs or siRNAs that block a gene that makes a cancer growth protein called ribonucleotide reductase.

"In the particle itself, we've built what we call a chemical sensor," Davis said in a telephone interview. "When it recognizes that it's gone inside the cell, it says OK, now it's time to disassemble and give off the RNA."
In a phase 1 clinical trial in patients with various types of tumors, the team gave doses of the targeted nanoparticles four times over 21 days in 30-minute intravenous infusions.

Tumor samples taken from three people with melanoma showed the nanoparticles found their way inside tumor cells.

And they found evidence that the therapy had disabled ribonucleotide reductase, suggesting the RNA had done its job.

Davis could not say whether the therapy helped shrink tumors in the patients, but one patient did receive a second cycle of treatment, suggesting it might have. Nor could he say whether there were any safety concerns.
Davis said that part of the study will be presented at the American Society of Clinical Oncology meeting in June.


Laser security for the Internet

Now a new invention developed by Dr. Jacob Scheuer of Tel Aviv University's School of Electrical Engineering promises an information security system that can beat today's hackers -- and the hackers of the future -- with existing fiber optic and computer technology. Transmitting binary lock-and-key information in the form of light pulses, his device ensures that a shared key code can be unlocked by the sender and receiver, and absolutely nobody else. He will present his findings to peers this May at the Conference on Lasers and Electro-Optics (CLEO) in San Jose, California.
"When the RSA system for digital information security was introduced in the 1970s, the researchers who invented it predicted that their 200-bit key would take a billion years to crack," says Dr. Scheuer. "It was cracked five years ago. But it's still the most secure system for consumers to use today when shopping online or using a bank card. As computers become increasingly powerful, though, the idea of using the RSA system becomes more fragile."

Plugging a leak in a loophole
Dr. Scheuer says the solution lies in a new kind of system to keep prying eyes off secure information. "Rather than developing the lock or the key, we've developed a system which acts as a type of key bearer," he explains.
But how can a secure key be delivered over a non-secure network -- a necessary step to get a message from one user to another? If a hacker sees how a key is being sent through the system, that hacker could be in a position to take the key. Dr. Scheuer has found a way to transmit a binary code (the key bearer) in the form of 1s and 0s, but using light and lasers instead of numbers. "The trick," says Dr. Scheuer, "is for those at either end of the fiber optic link to send different laser signals they can distinguish between, but which look identical to an eavesdropper."
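The logic of that trick can be sketched abstractly. In the conceptual toy below (not Dr. Scheuer's actual device, and with all names and parameters invented for illustration), each endpoint randomly picks one of two signal states per round; when the choices differ, the combined line state looks the same to an eavesdropper no matter who chose which, yet each endpoint, knowing its own choice, can infer the other's -- and only those rounds contribute key bits:

```python
import random

# Conceptual toy of a "distinguishable to endpoints, identical to an
# eavesdropper" key exchange. Not a model of the real optical physics.

def exchange_key(n_rounds, rng=None):
    rng = rng or random.Random(42)
    alice_key, bob_key = [], []
    for _ in range(n_rounds):
        a, b = rng.randint(0, 1), rng.randint(0, 1)
        if a == b:
            continue  # identical choices would leak; discard the round
        # Mixed state: an eavesdropper sees only "mixed", not who sent what.
        alice_key.append(1 - a)  # Alice infers Bob's bit from her own choice
        bob_key.append(b)        # Bob simply keeps his own bit
    return alice_key, bob_key

alice, bob = exchange_key(100)
print(alice == bob, len(alice))
```

Because a bit is kept only when the two choices differ, Alice's inference (the complement of her own bit) always matches Bob's bit, so the two ends build identical keys from rounds that reveal nothing to a passive observer in this idealized model.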

New laser is key
Dr. Scheuer developed his system using a special laser he invented, which can reach over 3,000 miles without any serious loss of signal. This approach makes it simpler and more reliable than quantum cryptography, a newer technology that relies on the quantum properties of photons, explains Dr. Scheuer. With the right investment to test the theory, Dr. Scheuer says, the system could plausibly work over any terrestrial distance, even a round-the-world link for international communications.
"We've already published the theoretical idea and now have developed a preliminary demonstration in my lab. Once both parties have the key they need, they could send information without any chance of detection. We were able to demonstrate that, if it's done right, the system could be absolutely secure. Even with a quantum computer of the future, a hacker couldn't decipher the key," Dr. Scheuer says.


New Findings About How Cells Achieve Eternal Life

The study, published in the April issue of the journal Aging Cell, was performed by a research team directed by Professor Göran Roos at the Department of Medical Bioscience, Pathology. It examines how cells' telomeres (repetitive DNA sequences at the ends of chromosomes) are regulated during the process that leads to the eternal life of cells.

Lymphocytes, one type of blood cell, were analyzed on repeated occasions during their cultivation in an incubator until they achieved the ability to grow through an unlimited number of cell divisions, a process termed immortalization. In experiments, immortalization can be achieved by genetically manipulating cells in various ways, but in the lymphocytes under study it occurred spontaneously. This is an unusual phenomenon that can be likened to the development of leukemia in humans, for example.

The ends of chromosomes, the telomeres, are important for the genetic stability of our cells. In normal cells telomeres are shortened with every cell division, and at a certain short telomere length the cells stop dividing. With the occurrence of genetic mutations the cells can continue to grow even though their telomeres continue to be shortened. At a critically short telomere length, however, a so-called crisis occurs, with imbalance in the genes and massive cell death. In rare cases the cells survive this crisis and become immortalized. In previous studies this transition from crisis to eternal life has been associated with the activation of telomerase, an enzyme complex that can lengthen cells' telomeres and help stabilize the genes. A typical finding is also that cancer cells have active telomerase.
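The sequence of events described above -- steady telomere loss, a checkpoint where normal cells arrest, and a later crisis that a rare mutated cell survives -- can be captured in a toy simulation. All of the lengths and probabilities below are invented placeholders, not values from the study:

```python
import random

# Toy model of telomere attrition. Telomeres shorten each division; below a
# checkpoint length normal cells arrest (senescence), while cells with
# checkpoint-disabling mutations keep dividing until a critical length
# triggers crisis, which only a rare cell survives (immortalization).

def simulate_cell(start=10000, loss=100, checkpoint=5000, critical=2000,
                  checkpoint_mutated=False, survival_prob=1e-4, rng=None):
    rng = rng or random.Random(0)
    length, divisions = start, 0
    while True:
        length -= loss
        divisions += 1
        if length <= critical:
            # Crisis: massive cell death; rare survival means immortalization.
            fate = "immortalized" if rng.random() < survival_prob else "died in crisis"
            return divisions, fate
        if length <= checkpoint and not checkpoint_mutated:
            return divisions, "senescent"  # normal cells stop dividing here

print(simulate_cell())                         # a normal cell arrests
print(simulate_cell(checkpoint_mutated=True))  # a mutated cell reaches crisis
```

With these placeholder numbers a normal cell senesces after 50 divisions, while a checkpoint-mutated cell divides 80 times before hitting crisis, mirroring the qualitative picture in the text.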

The current study shows that cells initially lose telomere length with each cell division, as expected, and after a while enter a crisis stage with massive cell death. Those cells that survive the crisis and become immortalized evince no activation of telomerase; instead, this happens later in the process. The Umeå researchers found that the expression of genes that inhibit telomerase is reduced in cells that get through the crisis, but telomerase was not activated until positively regulating factors were activated, thus allowing the telomeres to become stabilized through lengthening. By analyzing the genetic expressions the scientists were able to show that the cells that survived the crisis stage had mutations in genes that are key to the repair of DNA damage and the regulation of growth and cell death. This discovery provides new insights into the series of events that needs to occur for cells to become immortalized, and it will have an impact on future studies of leukemia, for example.
The studies were carried out in collaboration with the Centre for Oncology and Applied Pharmacology at the University of Glasgow, and the Maria Skłodowska-Curie Memorial Cancer Centre and Institute of Oncology, Warsaw.


Outwitting Influenza

When the H1N1 virus swelled into a pandemic last year, it seemed to defy the rules: Not only was it completely resistant to the seasonal flu vaccine, but it also seemed least dangerous to people over 65 years old--the very population that's usually most susceptible to influenza. Now, two new studies that take a look at the structure of the swine flu virus begin to explain why. And in the near future, they could also help inform vaccine development.

Shape shifter: Seasonal viruses typically have outer envelopes studded with sugar-attachment sites (blue) that help deflect immune attacks. But the pandemic influenza virus shown here--a composite of both 1918 and 2009 viruses--has three antibody binding sites (red) that lack sugars.

The research, published today in Science Express and Science Translational Medicine, offers both a structural and chemical close-up of the 2009 H1N1 virus. In fact, both studies examine hemagglutinin, a protein found on the surface of virus particles that activates the human immune system's protective response. The results reveal remarkable similarities between both pandemic-causing swine flu and 1918 Spanish flu viruses, two fast-spreading troublemakers separated by more than 90 years. 

Influenza has a well-earned reputation for speedy evolution--hemagglutinin mutations allow the virus to effectively evade the human immune response and reinfect the same population every year. But despite the near-century-long gap between the 1918 and the 2009 viruses, they have surprisingly similar hemagglutinin structures. In research led by Ian Wilson, a structural immunologist at the Scripps Research Institute in La Jolla, CA, x-ray crystallography shows that the two viruses share a near-identical binding site for the flu-fighting immune proteins called antibodies. Together with colleagues at Vanderbilt University, Wilson shows that an antibody isolated from someone who survived the 1918 pandemic was equally effective at attacking and neutralizing the 2009 virus.

"We looked at the site the antibodies respond to in the 1918 virus, and that site was completely conserved in the novel H1N1," Wilson says. "There are similarities between the 1918 influenza virus and the recent swine flu, at least at the level at which our immune system recognizes these different viruses."
The structural similarities explain why the 2009 H1N1 virus had a reverse age-group effect. "We all realized we weren't seeing the mortality rates that we'd expect to see in elderly people, and now we know why. These viruses share a characteristic in their most important protein," says flu specialist Greg Poland, director of the Mayo Clinic's Vaccine Research Group, in Rochester, MN, who was not involved with the research.

The second study, led by virologist Gary Nabel of the National Institute of Allergy and Infectious Diseases, also found striking similarities between the two pandemic viruses as well as one shared difference from the seasonal flu viruses that have been circulating for the last few decades. When Nabel and colleagues found that immunizing mice against the 1918 flu also protected them from the new 2009 virus--a result explained by Wilson's research--they, too, took a hard look at the viruses' structure.

Typically, one of the ways the influenza virus fends off antibody attacks is by using sugars called glycans as a shield, covering the hemagglutinin rather "like an umbrella," Nabel says. He and his colleagues found that both the 1918 and the 2009 H1N1 viruses lack glycans at the tip of their hemagglutinin proteins.
Viruses use these glycans mainly to hide from the human immune system; such measures are unnecessary in pigs and birds, which have shorter lifespans and tend not to be infected more than once. But in humans, adding sugars allows the virus to mutate and attack the same person multiple times. "Because humans live longer than one or two flu seasons, there's more pressure on the virus to evolve mechanisms to escape antibody response, and one way to do that is by acquiring glycans," says Richard Webby, who specializes in flu virology and ecology at St. Jude's Research Hospital in Memphis, TN. 

In the two pandemic viruses, the lack of glycans indicates a very recent jump from animals--so recent that they hadn't yet had time to evolve. That worries Nabel. He is concerned that as the 2009 virus morphs, attaching sugars to better evade detection, it will become a more dangerous flu. When he and his collaborators forced the virus to evolve in the laboratory, attaching glycans to the sites where none currently exist, they found that the resulting viruses were resistant to the current H1N1 vaccine. 

"That's a great concern, because it says that it's very likely this 2009 virus isn't going to stop dead in its tracks. It's going to find ways to outwit the human immune system," Nabel says. In fact, he points out, it already has. Four strains of H1N1 have now been found in Russia and China that indicate the addition of glycans has already occurred, just as the researchers predicted. 

But predicting evolution also means that it's possible to vaccinate against it. When the researchers immunized mice against their lab-evolved strain of H1N1, the mice generated an effective immune response. "So we actually have a way of trying to anticipate what the virus might do, and developing vaccines that would be effective against the change," Nabel says. 

"We change the vaccine every year because these viruses drift so frequently. And yet, an important element of this particular virus was conserved for 90 years," says the Mayo Clinic's Greg Poland. "Now you have a marker to try and understand viral evolution and how that plays out in terms of the human experience with that virus." 

Nabel and others believe it could also inform vaccination protocol. If the cause of H1N1's virulence was due to a lapse in our "herd" immunity, with too much time elapsed since the last time it circulated, the researchers propose that it might be worth considering regular vaccinations with prior pandemic strains--using history to predict viral evolution and inform vaccine development.

By Lauren Gravitz
From Technology Review

Nanotube RFID: Better Barcodes?

Radio-frequency identification (RFID) tags have made paying toll fees and public transit fares a breeze. But the tags, which are made of silicon, are still too expensive to replace ubiquitous barcodes, a switch that could speed up grocery store checkout lines by letting a product be scanned remotely while it's still in the basket.
Cheap plastic RFID tags could soon change that. Researchers in Sunchon, South Korea, have printed RFID circuits on plastic films using a combination of industrial methods: roll-to-roll printing, ink-jet printing, and silicone rubber-stamping. They use inks containing various materials--silver, carbon nanotubes, and a nanoparticle-polymer hybrid--to deposit the circuit's components, such as capacitors and transistors, layer by layer.

 Roll up: Plastic RFID tags printed with a roll-to-roll process could replace barcodes if developers can get the price down to a penny or less.

Gyoujin Cho, a professor of printed electronics engineering at Sunchon National University, who led the work, estimates that the tags cost three cents apiece. To replace barcodes, RFID tags will need to cost a penny or less. But Cho says this should be achievable if all the layers on a tag can be deposited with a roll-to-roll process. A version of the current prototype that is capable of holding useful amounts of data should be on the market later this year, he says.
The new RFID tags will be the first product to use printed transistors made from carbon nanotubes. Researchers have been developing nanotube inks for a decade, but the only nanotube electronic product on the market so far is a film for display electrodes. Rick Jansen at carbon nanotube ink maker SouthWest NanoTechnologies says that good quality nanotube inks that are uniform and viscous enough to print have been costly to produce. 

Making transistors using nanotube ink is also hard because mixtures are typically two-thirds semiconducting and one-third metallic, and the metallic component makes the mixture conducting overall. Cho and researchers at Paru Corporation in Sunchon have patented a simple process to make nanotube inks semiconducting. They coat the metallic tubes in the solution with a polymer. "You shake them with certain polymers and wrap them up and you just leave them in," says Rice University chemistry professor James Tour, who was also involved in the new work. 

The resulting transistors are large and don't perform on par with silicon devices. But, says Tour, "RFID tags are a perfect application for them because you only need a handful of bits."
Making transistor arrays that control the pixels in a flexible display with nanotube ink would be more challenging. "With displays you need better transistors," he says. "We can print small transistors with carbon nanotube inks, but printing a large number of them with good alignment is hard." Nevertheless, Cho says, the Korean team is working on making display control circuits with their nanotube transistors.
Passive RFID tags, which are used to track objects, are made of two main parts: a silicon integrated circuit and an antenna that's typically made of solid copper or printable silver ink. The antenna coil captures AC power from the reader's radio frequency signal, and the AC power is converted into DC power at a rectifier circuit. Another circuit uses this power to generate the signals that are transmitted back to the reader, conveying the information stored on the tag.
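The power-and-reply handshake described above can be sketched in a few lines. This is a toy model, not code from the paper: the class name, voltage threshold, and diode drop are all illustrative assumptions, chosen only to show how the rectified DC level gates whether the tag can answer the reader.

```python
# Toy model of a passive RFID read cycle: the reader's RF field powers the
# tag, whose rectifier turns the induced AC into the DC that drives the logic.
# All names and numbers here are illustrative, not from the paper.

class PassiveTag:
    def __init__(self, bits):
        self.bits = bits              # data stored on the tag
        self.min_dc_volts = 1.2       # minimum voltage the tag's logic needs

    def rectify(self, ac_amplitude):
        # A real rectifier loses some voltage across its diodes;
        # model that loss as a fixed drop.
        diode_drop = 0.3
        return max(0.0, ac_amplitude - diode_drop)

    def respond(self, ac_amplitude):
        dc = self.rectify(ac_amplitude)
        if dc < self.min_dc_volts:
            return None               # too far from the reader: no reply
        return self.bits              # backscatter the stored data


tag = PassiveTag(bits=[1])            # the prototype stores a single bit
print(tag.respond(2.0))               # strong field near the reader -> [1]
print(tag.respond(1.0))               # weak field out of range -> None
```

The key point the model captures is that a passive tag has no battery: if the captured field is too weak to power the logic, there is simply no response, which is why read range (10 centimeters for the printed prototype) matters so much.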

Cho and his colleagues start by using a roll-to-roll process to deposit the antenna coils, a bottom electrode layer of silver ink, and a subsequent insulating layer, a barium titanate nanoparticle-polymer hybrid ink. Next, they put down layers of carbon nanotube inks using an ink-jet printer to make the circuit's transistors. Finally, they use a silicone rubber stamp to print the capacitors and diodes needed to make the RFID tag's rectifier circuit. They use an ink of cobalt-doped zinc oxide nanowires to make the semiconducting layer in the diode, and aluminum paste for the top electrodes. The researchers outline their process in the March issue of the journal IEEE Transactions on Electron Devices.

The finished tag is three times the size of a standard barcode, and it stores just one bit of information, a 1 or a 0, so it can only give a yes or no response to the reader. Cho says that a 64-bit tag should be available on the market next year. The final goal is a 96-bit tag to replace barcodes. 
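To put those bit counts in perspective, here is a quick back-of-the-envelope calculation (plain Python, nothing specific to the printed tags themselves) of how many distinct codes each tag size can hold; the 96-bit figure is what would let a tag give every individual item its own identifier rather than one code per product type.

```python
# Number of distinct codes an n-bit tag can store: 2**n.
# 1 bit  -> yes/no only (the current prototype)
# 64 bit -> planned for next year
# 96 bit -> the barcode-replacement goal
for bits in (1, 64, 96):
    print(f"{bits:>2}-bit tag: {2**bits:,} distinct codes")
```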

"The real impact would be if they can compete in price," says Pulickel Ajayan, a mechanical engineering and materials science professor at Rice who wasn't involved in the work. "That's one of the reasons why nanotubes might come into play. It's a roll-to-roll process, which makes it feasible to get into the market."
Improving the resolution and accuracy of the roll-to-roll printer should yield smaller tags that carry more information, Cho says. But the team also needs to improve the circuit so it emits higher-power signals: the reader currently works only up to 10 centimeters away--not yet enough for a checkout line.

By Prachi Patel
From Technology Review

GE to Make Thin-Film Solar Panels

GE has confirmed long-standing speculation that it plans to make thin-film solar panels that use a cadmium- and tellurium-based semiconductor to capture light and convert it into electricity. The move could put pressure on the only major cadmium-telluride solar-panel maker, Tempe, AZ-based First Solar, and drive down prices for solar panels.
Light materials: Cadmium telluride, a semiconductor that’s good at absorbing light, can be used to make inexpensive solar panels.

Last year, GE seemed to be getting out of the solar industry as it sold off crystalline-silicon solar-panel factories it had acquired in 2004. The company found that the market for such solar panels--which account for most of the solar panels sold worldwide--was too competitive for a relative newcomer, says Danielle Merfeld, GE's solar technology platform leader. 

She says cadmium-telluride solar is attractive to GE in part because, compared to silicon, there's still a lot to learn about the physics of cadmium telluride--suggesting its efficiency can still be improved, which in turn would lower the cost per watt of solar power. It's also potentially cheaper to make cadmium-telluride solar panels than silicon solar cells, making it easier to compete with established solar-panel makers. Merfeld says GE was encouraged by the example of First Solar, which has consistently undercut the prices of silicon solar panels--and because of this has quickly grown from producing almost no solar panels just a few years ago to being one of the world's largest solar manufacturers today.

GE will work to improve upon cadmium-telluride solar panels originally developed by PrimeStar Solar, a spin-off of the National Renewable Energy Laboratory in Golden, CO. GE acquired a minority stake in the company in 2007, and then a majority stake in 2008, but it didn't say much about its intentions for the company until last week, when it announced that it would focus its solar research and development on the startup's technology.
"It definitely makes sense that they would avoid silicon at this stage," says Sam Jaffe, a senior analyst at IDC Energy Insights in Framingham, MA. Especially in the last year, the market for silicon solar panels has been extremely competitive, with companies making little or no profit. "There's a lot more space to wring profits out of making cadmium telluride."

GE appears to be shying away from newer thin-film solar technology based on semiconductors made of copper, indium, gallium, and selenium (CIGS). Merfeld says that it is uncertain how well that material can perform at the larger sizes and volumes needed for commercial solar panels. Cadmium telluride is a simpler material that's much easier to work with than CIGS, which makes it easier to achieve useful efficiencies in mass-produced solar panels. 

Merfeld says GE hopes to compete with First Solar by offering higher performance solar cells and reducing the overall cost of solar power. In addition, its name recognition could encourage installers to buy its panels and could help secure financing of solar projects from banks. GE also has extensive distribution networks, especially for new construction, says Travis Bradford, president of the Prometheus Institute for Sustainable Development, a consultancy in Chicago.
Yet challenges remain. Tellurium is a rare material, so to keep its costs down, it will be important for GE to secure large supplies of tellurium rather than buying it on the open market, Jaffe says. He says having another large manufacturer of tellurium-based solar panels may make it necessary to discover new sources of the element.

What's more, First Solar has a large lead on GE in terms of its experience manufacturing cadmium telluride and finding ways to bring down prices. It could be challenging to even get close to First Solar's costs. "If GE wants to get into photovoltaics, the crystalline silicon boat already sailed," Bradford says. "The problem is that the thin-film boat may have as well, particularly for cadmium telluride."

By Kevin Bullis 
From Technology Review

HTML 5 Could Challenge Flash

Since it was introduced in the mid-'90s, Adobe's Flash has remained one of the most popular ways for developers to create animations, video, and complex interactive features for the Web--regardless of what browser or operating system an end user is running. According to Adobe, which makes the Flash Player and various Flash development tools, 98 percent of Internet-connected desktop computers have Flash installed, and 95 percent have the most recent version, Flash Player 10.
In an effort to further push the adoption of Flash technology, yesterday Adobe released a new set of features for Flash, including a cloud-based service that lets developers connect applications to 14 different social networks through a single programming interface.
However, Flash's days of dominance may be numbered. Experts say there are two major threats: Apple's open hostility to the technology on its iPhone and iPad devices, and the rise of a new open Web standard called HTML 5, which seeks to make interactivity an integral part of all Web browsers. While Flash adds extra capabilities to a browser only after it is downloaded and installed, HTML 5 would ensure that similar functionality ships by default in any browser that adopts the standard--and that it is not controlled by a single company.
Although HTML 5 is designed to vastly extend a browser's abilities, including the handling of graphics and video, Adobe continues to release tools that keep Flash a step ahead. Its development tools also offer a simpler way to create rich Web content. For example, many social networking companies offer different interfaces of their own, and Adobe's new social-network service makes it easier for developers to tap into these.

However, the core strength of Flash--its ability to render graphics and animation in the browser--is coming under attack. At a panel discussion held last week at South by Southwest Interactive in Austin, TX, industry experts debated whether a key element of HTML 5 called Canvas could perform the same tasks for many developers. Canvas allows graphics, animation, and interactive features to run inside a browser without any additional plug-ins.

Ben Galbraith, who works on Palm's WebOS and has been involved with the community of open-source software developers responsible for the Mozilla Firefox browser, recently used Canvas to create a rich Web-based code-editing application. Galbraith and his collaborators had to build many components from scratch. "We had to do a lot of work, but we had great performance and control," he said at South by Southwest.
All this effort shows why Flash is still useful, said Chet Haase, who works on the software developer kit for Flex, a framework from Adobe that can be used to build sophisticated Web applications that run through the Flash player. Flex makes it simple for users to create and reuse visual features of an application, Haase said. Referring to the extensive work Galbraith put into his code editor, he joked: "I would love to reinvent a user interface tool kit every year. It gives a great opportunity to do a lot of interesting programming."
Nathan Germick, a Flash developer for social-game company WonderHill, agreed: "In terms of immediate access to an amazingly powerful tool set, there's really no contest."

But others argue that Canvas will soon have similar tools and libraries of its own. "Isn't it a matter of time?" Alon Salant, founder and owner of San Francisco-based software development firm Carbon Five, said at the panel.
The contest between HTML 5 and Flash is complicated by the issue of platform support, or lack of it. For example, Apple has so far shut Flash out of the iPhone and iPad, and it will take time for even Flash-friendly devices such as Android phones to reach the market with full support for Flash Player 10.1. Adobe is working to close this gap by releasing tools that repackage Flash applications into a format that can be submitted to Apple's app store.

HTML 5, on the other hand, has seen good support on mobile devices. Google recently used the iPhone's support for HTML 5 to make its Google Voice application available through the phone's browser after Apple rejected the application from its app store.
However, HTML 5 suffers from a notable lag in adoption--it doesn't work on Microsoft's Internet Explorer, which is still the most popular Web browser in the world. Though Microsoft recently announced that Internet Explorer 9 would support features from HTML 5, it's not yet clear whether the Canvas element will be included.

An important reason why Salant and Galbraith prefer HTML 5 is that the technology isn't proprietary. When an application is written using Canvas, other developers can use the browser's "view source" command to understand exactly how it works and learn from it, Salant noted. Galbraith added that developers don't have enough control over what features a proprietary platform such as Flash supports over time.

Adobe executives, when asked, reject talk of a showdown between HTML 5 and Flash. "The idea that they're competitive technologies doesn't make sense," says Adrian Ludwig, who was until recently group manager for Flash Platform product marketing at Adobe. He points out that support for HTML 5 is built into Adobe's AIR platform, which can be used to build Web applications that can run even without an Internet connection by storing some data locally. Internet applications have always been built using a mix of Web technologies, Ludwig says, and "Flash will continue to fill some of the gaps."

By Erica Naone
From Technology Review