Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........

Secrets of the Gecko Foot Help Robot Climb

Mark Cutkosky, the lead designer of the Stickybot, a professor of mechanical engineering and co-director of the Center for Design Research, has been collaborating with scientists around the nation for the last five years to build climbing robots.

Paul Day and Alan Asbeck worked on adhesives for the feet of the gecko-like Stickybot.

After designing a robot that could conquer rough vertical surfaces such as brick walls and concrete, Cutkosky moved on to smooth surfaces such as glass and metal. He turned to the gecko for ideas.

"Unless you use suction cups, which are kind of slow and inefficient, the other solution out there is to use dry adhesion, which is the technique the gecko uses," Cutkosky said.

Wonders of the gecko toe
The toe of a gecko's foot contains hundreds of flap-like ridges called lamellae. On each ridge are millions of hairs called setae, each 10 times thinner than a human hair. Under a microscope, you can see that each hair divides into smaller strands called spatulae, making it look like a bundle of split ends. These split ends are so tiny (a few hundred nanometers) that they interact with the molecules of the climbing surface.

The interaction between the molecules of gecko toe hair and the wall is a molecular attraction called van der Waals force. A gecko can hang and support its whole weight on one toe by placing it on the glass and then pulling it back. The toe sticks only when pulled in one direction -- gecko toes are a kind of one-way adhesive, Cutkosky said.

"It's very different from Scotch tape or duct tape, where, if you press it on, you then have to peel it off. You can lightly brush a directional adhesive against the surface and then pull in a certain direction, and it sticks itself. But if you pull in a different direction, it comes right off without any effort," he said.

Robots with gecko feet
One-way adhesive is important for climbing because it requires little effort to attach and detach a robot's foot.
"Other adhesives are sort of like walking around with chewing gum on your feet: You have to press it into the surface and then you have to work to pull it off. But with directional adhesion, it's almost like you can sort of hook and unhook yourself from the surface," Cutkosky said.

After the breakthrough insight that direction matters, Cutkosky and his team began asking how to build artificial materials for robots that create the same effect. They came up with a rubber-like material with tiny polymer hairs made from a micro-scale mold.

The designers attach a layer of adhesive cut to the shape of Stickybot's four feet, which are about the size of a child's hand. As it steadily moves up the wall, the robot peels and sticks its feet to the surface with ease, resembling a mechanical lizard.

The newest versions of the adhesive, developed in 2009, have a two-layer system, similar to the gecko's lamellae and setae. The "hairs" are even smaller than the ones on the first version -- about 20 micrometers wide, which is five times thinner than a human hair. These versions support higher loads and allow Stickybot to climb surfaces such as wood paneling, painted metal and glass.

The material is strong and reusable, and leaves behind no residue or damage. Robots that scale vertical walls could be useful for accessing dangerous or hard-to-reach places.

Next steps
The team's new project involves scaling up the material for humans. A technology called Z-Man, which would allow humans to climb with gecko adhesive, is in the works.

Cutkosky and his team are also working on a Stickybot successor: one that turns in the middle of a climb. Because the adhesive only sticks in one direction, turning requires rotating the foot.

"The new Stickybot that we're working on right now has rotating ankles, which is also what geckos have," he said.

"Next time you see a gecko upside down or walking down a wall head first, look carefully at the back feet, they'll be turned around backward. They have to be; otherwise they'll fall."

Cutkosky has collaborated with scientists from Lewis & Clark College, the University of California-Berkeley, the University of Pennsylvania, Carnegie Mellon University and a robot-building company called Boston Dynamics. His project is funded by the National Science Foundation and the Defense Advanced Research Projects Agency.

From sciencedaily.com

Scientists Unveil Structure of Adenovirus, the Largest High-Resolution Complex Ever Found

The study was published in the journal Science on August 27, 2010.
"We learned a number of important things about the virus from the structure, including how its key contacts are involved in its assembly," said Scripps Research Professor Glen Nemerow, who, together with Scripps Research colleague Associate Professor Vijay Reddy, led the study. "That's very important if you want to reengineer the virus for gene therapy."

Scripps Research scientists have pieced together the structure of a human adenovirus (two views illustrated here).

"Even though a number of viral structures have been solved by x-ray crystallography, this is the biggest to date," said Reddy. "The adenovirus is 150 megadaltons, which contains roughly 1 million amino acids -- twice as big as PRD1, previously the largest virus ever solved to atomic resolution."

The Promise of Gene Therapy
First discovered in the 1950s, adenoviruses are a major class of disease-causing agents that includes viruses that cause the common cold. While the body is usually able to fight off infection eventually, infants and people with compromised immune systems can be susceptible to severe complications. No medications are currently available against adenoviruses, so current treatment focuses on managing symptoms.

While adenovirus has plagued humankind for millennia, more recently scientists have sought to exploit some of its properties -- such as its stability and ability to infect many different types of cells -- to engineer cures for other diseases. The hope is that modified adenovirus could play a role in gene therapy, used as a vector (carrier) for delivering therapeutic genes to the interior of cells.

"Adenovirus was used early on in pioneering gene therapy trials for the treatment of cystic fibrosis," said Nemerow. "Those trial failed because scientists didn't understand the biology and virus-host cell interactions to be able to use the virus properly."

Despite early setbacks, adenovirus is still being used in about 25 percent of human gene therapy trials, mostly for cancer and cardiovascular disease, according to Nemerow. A better understanding of the virus could help advance those efforts, which, if successful, could have a major impact on a host of conditions.

The Marathon Begins
So, in 1998 Nemerow and Reddy set out to determine the molecular structure of adenovirus -- with no idea it would take them 12 years to succeed.

The scientists turned to a technique known as x-ray crystallography, the gold standard of molecular structure determination for large complexes. In this method, scientists produce large quantities of a protein or virus, then coax it into crystal form. The crystal is placed in front of a beam of x-rays, which diffract when they strike the atoms in the crystal. From the pattern of diffraction, scientists can reconstruct the shape of the original molecule.
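
The geometry behind that reconstruction is Bragg's law, n x lambda = 2d x sin(theta), which relates the spacing d between planes of atoms to the angles at which x-rays of wavelength lambda diffract. A quick sketch of the numbers involved (the wavelength below is a typical synchrotron value, not a figure from the study):

    import math

    wavelength = 1.0   # x-ray wavelength in angstroms; typical synchrotron value (assumed)
    d = 3.5            # spacing between atomic planes, comparable to the
                       # 3-4 angstrom resolution the team eventually reached

    # Bragg's law, n * wavelength = 2 * d * sin(theta), with n = 1:
    theta = math.degrees(math.asin(wavelength / (2 * d)))
    print(round(theta, 1))  # ~8.2 degrees; finer atomic spacings diffract to
                            # wider angles, so higher resolution means capturing
                            # spots farther out on the detector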

Several steps in this process, however, can be problematic.
The first challenge is producing crystals. Other scientists' attempts at crystallizing adenovirus had failed due to long, spoke-like fibers that stick out of the vertices of the complex. These fibers interfere with crystallization, which requires close packing of the particles.

The Nemerow lab, however, produced an adenovirus lacking these long spindles, sparking hope that these particles could be crystallized. But the scientists found that the new virus was somewhat unstable. In another attempt, the Nemerow lab produced a version of the virus that had short fibers rather than long ones.
"This was a 'best of both' situation," said Reddy. "We had a stable virus and short fibers so we could pack the particles to produce crystals."

However, the production of crystals also required high concentrations of protein in solution. This turned out to be a problem because at high concentrations the adenovirus clumped together, again becoming unstable. After trial and error with various chemical additives and buffers, Nemerow and Reddy eventually arrived at conditions that kept the virus solubilized.

The scientists were able to produce crystals at last.
But when Nemerow and Reddy took the crystals to the Advanced Light Source synchrotron facility at Lawrence Berkeley National Laboratory, the crystals did not diffract.

Persevering in the Face of Adversity
What the scientists needed was higher quality crystals.
Nemerow and Reddy persevered, continuing to vary the conditions under which the crystals were formed in an attempt to produce diffraction-quality crystals. At this point, Reddy and Nemerow had the idea of turning to robotic crystallization -- at that time (around 2002) still somewhat of a novelty and a relatively expensive fee-for-service proposition.

"In the lab, we were limited by the dispensing of small volumes of sample," Reddy explained. "We could not pipette less than one microliter at a time. These robotic trials used very small -- 50 nanoliter -- trials. With these small drops, we could explore a large number of conditions. Crystals also grow faster in these small drops."

After a series of robotic trials, the scientists were able to identify conditions that would produce high-quality crystals. They then reproduced these conditions in the lab and manufactured enough crystals to make another trip to the Berkeley Lab synchrotron. There, Nemerow and Reddy found that the crystals diffracted -- but only to 10 angstroms, not enough to solve the structure.

Luckily, around that time (now 2005) a cutting-edge new beamline, 23 ID-D at the Advanced Photon Source synchrotron at Argonne National Laboratory near Chicago, was coming online. Reddy's mentor, Scripps Research Professor Jack Johnson, suggested they give that facility a try.

After traveling to Chicago with their samples, Nemerow and Reddy discovered that, using the new beamline, their adenovirus crystals indeed diffracted to 3-4 Å resolution.

Toward the Finish Line
But the team's challenges were not over. The scientists found that the crystals deteriorated under the force of the x-rays, limiting the amount of data that could be obtained from a single crystal.

"This is such a big assembly that there are people who think that in solution viruses in general are breathing entities," said Reddy. "In crystals you need to have perfect packing. Every molecule has to be in lock step. That's why any disruption can affect the packing of the virus -- and then you don't have diffraction."

Nemerow and Reddy continued to vary the conditions under which the crystals were grown and prepared to produce more robust crystals. The pair also realized that they had the most success with fresh crystals that had just completed their two-month growth cycle. The scientists started timing their trips to Chicago to coincide with the period when the crystals were freshest.

Nemerow and Reddy began travelling to Chicago at least three times a year, grateful that their hosts in Chicago were willing to provide generous use of the beamline. In total, the scientists collected data from nearly 1,000 crystals, about 10 percent of which diffracted.

"We used about 100 crystals to solve the structure," said Reddy. "That's nearly 20 million reflections. So we started calling ourselves 'millionaires'…"

As the data was collected from crystal after crystal, Reddy took on the task of merging the massive amounts of information to finally piece together a picture of the virus.

"Normally, more than one person does that work," Nemerow noted. "In this situation, Vijay was the one who did it all pretty much by himself."

A New Understanding
The picture of the virus that emerged from this work comprises the majority of the adenovirus. Small, elusive portions resisted placement, probably because these regions of the virus are dynamic, essentially moving too much to assign a definitive location.

The new structure provides information on where the weakest links in the viral assembly are, as well as where the strength of the virus lies.

"The virus has to be stable to survive in the environment, but also has to be able to disassemble when it enters the cell," said Nemerow. "So the structure revealed places in the virus that are loosely bound so they can come apart upon cell entry."

Nemerow notes that it might be possible to use this information to develop drugs, for example compounds that prevent the virus from disassembling and thus from infecting cells. He is also excited about the potential of this new information for use in gene therapy.

"Like an electrical engineer who wants to go into a house and put in new wiring, [a genetic engineer] doesn't want to put something where there is another important contact," he said. "Knowing where these important regions are in the virus is crucial."

The scientists note that the adenovirus structure also revealed unanticipated changes in some of the key proteins involved in receptor interactions, highlighting the plasticity of these parts of the assembly.

In future research, the team plans to focus on better understanding the portion of the virus that resisted characterization in this study, as well as on comparing different types of adenovirus structures.

In addition to Nemerow and Reddy, authors of the paper were Professor Phoebe L. Stewart of Vanderbilt University Medical Center and Research Associate S. Kundhavai Natchiar of Scripps Research.

The early phases of the project were funded by Novartis/GTI and later phases by the National Institutes of Health. The use of the Advanced Photon Source was supported by the U.S. Department of Energy.

From sciencedaily.com

Tiny Logo Demonstrates Advanced Display Technology Using Nano-Thin Metal Sheets

The gratings, sliced into metal-dielectric-metal stacks, act as resonators. They trap and transmit light of a particular color, or wavelength, said Jay Guo, an associate professor in the Department of Electrical Engineering and Computer Science. A dielectric is a material that does not conduct electricity.

"Simply by changing the space between the slits, we can generate different colors," Guo said. "Through nanostructuring, we can render white light any color."

An optical microscopy image of a 12-by-9-micron U-M logo produced with this new color filter process.

A paper on the research is published Aug. 24 in Nature Communications.
His team used this technique to make what it believes is the smallest color U-M logo. At about 12 by 9 microns, it's about 1/6 the width of a human hair.

Conventional LCDs, or liquid crystal displays, are inefficient and laborious to manufacture. Only about 5 percent of their backlight travels through them and reaches our eyes, Guo said. They contain two layers of polarizers, a color filter sheet, and two layers of electrode-laced glass in addition to the liquid crystal layer. Chemical colorants for red, green and blue pixel components must be patterned in different regions on the screen in separate steps.

Guo's color filter doubles as a polarizer, eliminating the need for separate polarizer layers. In Guo's displays, reflected light could be recycled to save much of the light that would otherwise be wasted.
Because these new displays contain fewer layers, they would be simpler to manufacture, Guo said. The new color filters contain just three layers: two metal sheets sandwiching a dielectric. Red, green and blue pixel components could be made in one step by cutting arrays of slits in the stack. This structure is also more robust and can endure higher-powered light.

Red light emanates from slits set around 360 nanometers apart; green from those about 270 nanometers apart; and blue from those approximately 225 nanometers apart. The differently spaced gratings essentially catch different wavelengths of light and resonantly transmit them through the stacks.
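
Putting rough numbers on that trend (the slit spacings are from the article; the peak wavelengths below are nominal values for red, green and blue light, assumed here purely for illustration), the transmitted wavelength comes out close to twice the slit period in each case:

    # color: (slit spacing in nm, per the article; assumed nominal peak wavelength in nm)
    gratings = {
        "red":   (360, 650),
        "green": (270, 530),
        "blue":  (225, 470),
    }
    for color, (period, wavelength) in gratings.items():
        print(color, round(wavelength / period, 2))
    # red 1.81, green 1.96, blue 2.09 -- wider slits pass longer wavelengths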

"Amazingly, we found that even a few slits can already produce well-defined color, which shows its potential for extremely high-resolution display and spectral imaging," Guo said.

The pixels in Guo's displays are about an order of magnitude smaller than those on a typical computer screen. They're about eight times smaller than the pixels on the iPhone 4, which are about 78 microns. He envisions that this pixel size could make this technology useful in projection displays, as well as wearable, bendable or extremely compact displays.

The paper is called "Plasmonic nano-resonators for high resolution color filtering and spectral imaging."
Guo is also an associate professor in the Department of Macromolecular Science and Engineering. This research is supported in part by the Air Force Office of Scientific Research and the Defense Advanced Research Projects Agency. The university is pursuing patent protection for the intellectual property and is seeking commercialization partners to help bring the technology to market.

From sciencedaily.com

The Great Vanishing Oil Spill

Microbes may become the heroes of the Gulf of Mexico oil spill by gobbling up oil more rapidly than anyone expected. Now some experts suggest we ought to artificially stimulate such microbes in stricken marshland areas to aid their cleanup.

Evidence published this week shows that deep-water microbes in the Gulf may be rapidly chewing up BP's spilled crude. This could sway federal authorities to use petroleum-digesting microbes or fertilizer additives that can stimulate naturally occurring bacteria for future spills. Such measures were originally rejected for the BP spill.

Crude crunching: Rod-shaped Oceanospirillales bacteria feed on a droplet of crude oil in this sample from a deep oil plume near BP's spill. The bacteria were collected by researchers from the Lawrence Berkeley Laboratory.

Ralph Portier, a marine toxicologist at Louisiana State University, says the EPA approves of such measures in general, but they weren't approved for the Gulf spill because it was thought they wouldn't be necessary--a presumption that now appears to be correct.

Oil has disappeared from the Gulf's surface waters since BP capped its blown-out well on July 15. Yet most of the estimated 4.9 million barrels of oil are unaccounted for. Some of BP's oil, however, has reached more than 100 miles of sensitive Gulf marsh, and may remain lodged deep within sediments for years.

Portier says cleanup authorities are following a 2001 federal position paper arguing that stimulating biodegradation was unnecessary in the Gulf ecosystem. The Gulf already harbors microbes adapted to degrading the region's naturally occurring underwater petroleum seeps, the federal paper said.

Microbial ecologist Terry Hazen, a bioremediation expert at the U.S. Department of Energy's Lawrence Berkeley Laboratory, says that this reasoning is correct for dispersed oil. Hazen led a team that identified a strain of microbes rapidly breaking down oil at a depth of 1,100 meters and icy temperatures as low as 5 °C--conditions where biodegradation is expected to proceed slowly. The research appears this week in the journal Science.

Hazen's team examined one of several plumes of oil droplets emanating from BP's blowout, and observed rod-shaped bacteria feasting on the 10- to 60-micrometer droplets fast enough to halve the oil every two to six days. That rate contradicts a study of the same plume conducted by the Woods Hole Oceanographic Institution, and published in Science earlier this month, which found meager oxygen consumption (depletion would be expected where large numbers of microbes are consuming oil) and concluded that the oil was therefore not being broken down.

The divergence, according to Hazen, is explained by the low concentration of the oil. While immense -- stretching over 35 kilometers -- the studied plume contained oil at no more than 10 parts per million. Oxygen depletion by microbes would thus be negligible, argues Hazen.

In recent weeks, Hazen's group has detected no oil, although it's possible the oil could simply have been carried out of view by Gulf currents. Federal incident commander Thad Allen told Technology Review on Wednesday that he needs a more rigorous measurement program to "get a handle on what is actually out there in the water."

Hazen bets that the dispersed oil has indeed broken down, and says some credit for that goes to the 1.84 million gallons of dispersant sprayed on the spilling oil as part of the cleanup operation. This dispersant also likely acted as a bioremediation agent because the tiny droplets it created gave microbes more surface area to chew at. 

Natural microbial activity, however, may fall short in marsh sediments where oil is concentrated and the supply of oxygen and nutrients is constrained, slowing microbial digestion to a crawl.

Artificial enhancements could speed up marsh recovery, says Portier. He says LSU scientists showed this to be true three years ago in a marsh in Lake Charles, LA, that was contaminated with heavy crude. They bolstered the marsh microbial community with an LSU-developed culture of oil-eating marsh microbes, along with diluted fertilizer. After 72 days, untreated sites still harbored more than half of their spilled oil, says Portier, whereas treated sites were clean enough to meet strict federal risk levels for residential areas.

Portier says he is working with a BP-backed research coalition of universities and state agencies that expects to have field trials underway in Gulf marshes next month.

By Peter Fairley
From Technology Review

Human Neural Stem Cells Restore Motor Function in Mice With Chronic Spinal Cord Injury

Previous breakthrough stem cell studies have focused on the acute, or early, phase of spinal cord injury, a period of up to a few weeks after the initial trauma when drug treatments can lead to some functional recovery.

The UCI study, led by Aileen Anderson and Brian Cummings of the Sue and Bill Gross Stem Cell Research Center, is significant because the therapy can restore mobility during the later chronic phase, the period after spinal cord injury in which inflammation has stabilized and recovery has reached a plateau. There are no drug treatments to help restore function in such cases.

Human neural stem cells transplanted into mice grew into neural tissue cells, such as oligodendrocytes.

The study appears in the open-access, peer-reviewed journal PLoS ONE and is available online.
The Anderson-Cummings team transplanted human neural stem cells into mice 30 days after a spinal cord injury caused hind-limb paralysis. The cells then differentiated into neural tissue cells, such as oligodendrocytes and early neurons, and migrated to spinal cord injury sites. Three months after initial treatment, the mice demonstrated significant and persistent recovery of walking ability in two separate tests of motor function when compared to control groups.

"Human neural stem cells are a novel therapeutic approach that holds much promise for spinal cord injury," said Anderson, associate professor of physical medicine & rehabilitation and anatomy & neurobiology at UCI. "This study builds on the extensive work we previously published in the acute phase of injury and offers additional hope to those who are paralyzed or have impaired motor function."

"About 1.3 million individuals in the U.S. are living with chronic spinal cord injury," added Cummings, associate professor of physical medicine & rehabilitation and anatomy & neurobiology. "This latest study provides additional evidence that human neural stem cells may be a viable treatment approach for them."

The research is the latest in a series of collaborative studies conducted since 2002 with StemCells Inc. that have focused on the use of StemCells' human neural stem cells in spinal cord injury and resulted in multiple co-authored publications. StemCells Inc., based in Palo Alto, Calif., is engaged in the research, development and commercialization of stem cell therapeutics and tools for use in stem cell-based research and drug discovery.

According to Dr. Stephen Huhn, vice president and head of the central nervous system program at StemCells Inc., "the strong preclinical data we have accumulated to date will enable our transition to a clinical trial, which we plan to initiate in 2011."

Desirée Salazar of UC San Diego, Nobuko Uchida of StemCells Inc. and Frank P.T. Hamers of the Tolbrug Rehabilitation Centre in 's-Hertogenbosch, Netherlands, also contributed to the study, which received National Institutes of Health and California Institute for Regenerative Medicine support.

From sciencedaily.com

Nanotech Yields Major Advance in Heat Transfer, Cooling Technologies

These coatings can remove heat four times faster than the same materials before they are coated, using inexpensive materials and application procedures.

The discovery has the potential to revolutionize cooling technology, experts say.
The findings have just been announced in the International Journal of Heat and Mass Transfer, and a patent application has been filed.

This nanoscale-level coating of zinc oxide on top of a copper plate holds the potential to dramatically increase heat transfer characteristics and lead to a revolution in heating and cooling technology, according to experts at Oregon State University and the Pacific Northwest National Laboratory.

"For the configurations we investigated, this approach achieves heat transfer approaching theoretical maximums," said Terry Hendricks, the project leader from the Pacific Northwest National Laboratory. "This is quite significant."

The improvement in heat transfer achieved by modifying surfaces at the nanoscale has possible applications in both micro- and macro-scale industrial systems, researchers said. The coatings produced a "heat transfer coefficient" 10 times higher than that of uncoated surfaces.

Heat exchange has been a significant issue in many mechanical devices since the Industrial Revolution.
The radiator and circulating water in an automobile engine exist to address this problem. Heat exchangers are what make modern air conditioners or refrigerators function, and inadequate cooling is a limiting factor for many advanced technology applications, ranging from laptop computers to advanced radar systems.

"Many electronic devices need to remove a lot of heat quickly, and that's always been difficult to do," said Chih-hung Chang, an associate professor in the School of Chemical, Biological and Environmental Engineering at Oregon State University. "This combination of a nanostructure on top of a microstructure has the potential for heat transfer that's much more efficient than anything we've had before."

There's enough inefficiency in heat transfer, for instance, that for water to reach its boiling point of 100 degrees centigrade, the temperature of adjacent plates often has to be about 140 degrees centigrade. But with this new approach -- thanks to a nanostructure that literally encourages bubble development -- water boils when similar plates are only about 120 degrees centigrade.
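
A back-of-the-envelope check shows what those plate temperatures imply (the flux value below is arbitrary; only the ratios matter). Newton's law of cooling, q = h x dT, ties the heat flux q moved per unit area to the temperature difference dT through the heat transfer coefficient h:

    q = 100_000.0       # fixed heat flux to remove, W/m^2 (arbitrary illustration)
    dT_uncoated = 40.0  # 140 C plate minus 100 C boiling water
    dT_coated = 20.0    # 120 C plate minus 100 C boiling water

    h_uncoated = q / dT_uncoated
    h_coated = q / dT_coated
    print(h_coated / h_uncoated)   # 2.0: the same flux at half the superheat

    # Equivalently, the reported 10x coefficient moves 10x the heat at a
    # fixed temperature difference:
    print((10 * h_uncoated) * dT_uncoated / q)   # 10.0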

To do this, heat transfer surfaces are coated with a nanostructured application of zinc oxide, which in this usage develops a multi-textured surface that looks almost like flowers, and has extra shapes and capillary forces that encourage bubble formation and rapid, efficient replenishment of active boiling sites.

In these experiments, water was used, but other liquids with different or even better cooling characteristics could be used as well, the researchers said. The coating of zinc oxide on aluminum and copper substrates is inexpensive and could affordably be applied to large areas.

Because of that, this technology has the potential not only to address cooling problems in advanced electronics, the scientists said, but also could be used in more conventional heating, cooling and air conditioning applications. It could eventually find its way into everything from a short-pulse laser to a home air conditioner or more efficient heat pump systems. Military electronic applications that use large amounts of power are also likely, researchers said.

The research has been supported by the Army Research Laboratory. Studies are continuing to develop broader commercial applications, researchers said.

"These results suggest the possibility of many types of selectively engineered, nanostructured patterns to enhance boiling behavior using low cost solution chemistries and processes," the scientists wrote in their study. "As solution processes, these microreactor-assisted, nanomaterial deposition approaches are less expensive than carbon nanotube approaches, and more importantly, processing temperatures are low."

From sciencedaily.com

A New Kind of Microchip

A computer chip that performs calculations using probabilities, instead of binary logic, could accelerate everything from online banking systems to the flash memory in smart phones and other gadgets.

Rewriting some fundamental features of computer chips, Lyric Semiconductor has unveiled its first "probability processor," a silicon chip that computes with electrical signals that represent chances, not digital 1s and 0s.

Probability chip: This computer chip uses signals representing probabilities, not digital bits.

"We've essentially started from scratch," says Ben Vigoda, CEO and founder of the Boston-based startup. Vigoda's PhD thesis underpins the company's technology. Starting from scratch makes it possible to implement statistical calculations in a simpler, more power efficient way, he says. 

And because that kind of math is at the core of many products, there are many potential applications. "To take one example, Amazon's recommendations to you are based on probability," says Vigoda. "Any time you buy [from] them, the fraud check on your credit card is also probability [based], and when they e-mail your confirmation, it passes through a spam filter that also uses probability."

All those examples involve comparing different data to find the most likely fit. Implementing the math needed to do this is simpler with a chip that works with probabilities, says Vigoda, allowing smaller chips to do the same job at a faster rate. A processor that dramatically speeds up such probability-based calculations could find all kinds of uses. But Lyric will face challenges in proving the reliability and scalability of its product, and in showing that it can be easily programmed.

The electrical signals inside Lyric's chips represent probabilities, instead of 1s and 0s. While the transistors of conventional chips are arranged into components called digital NAND gates, which can be used to implement all possible digital logic functions, those in a probability processor make building blocks known as Bayesian NAND gates. Bayesian probability is a field of mathematics named after the eighteenth century English statistician Thomas Bayes, who developed the early ideas on which it is based.

Whereas a conventional NAND gate outputs a "0" only when both of its inputs are "1," the output of a Bayesian NAND gate represents the odds that the two input probabilities match. This makes it possible to perform calculations that use probabilities as their input and output.
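
Lyric hasn't published its gate design, but the flavor of probability-valued logic can be sketched with the standard "soft" generalization of a NAND gate, which treats each input as the probability that a bit is 1 and assumes the inputs are independent (an illustration only, not Lyric's circuit):

    def soft_nand(p_a, p_b):
        """Probability the NAND output is 1, given P(a=1) and P(b=1),
        assuming independent inputs."""
        return 1.0 - p_a * p_b

    # With hard 0/1 inputs it reproduces the digital NAND truth table...
    assert soft_nand(1, 1) == 0 and soft_nand(1, 0) == 1
    # ...but it also operates on uncertain inputs directly:
    print(soft_nand(0.9, 0.8))   # ~0.28 -- the output is probably 0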

Lyric has been working on its technology in stealth mode since 2006, partly with funding from the U.S. Defense Advanced Research Projects Agency. DARPA is interested in potential defense applications that would involve working with information that isn't clear cut--for example, radio signals distorted accidentally or otherwise, and machine vision systems that try to recognize actions or objects in images. "They're interested in some James Bond-type applications," says Vigoda.

Within three years, Lyric plans to produce prototypes of a general-purpose probability processor, dubbed GP5, capable of being programmed to take on any statistical task. But Lyric's first chip, available to license from this week by companies that build flash memory into their devices and products, is targeted at boosting the efficiency, and ultimately the capacity, of the solid-state flash memory at the heart of portable gadgets like smart phones and tablets.

Flash memory chips store data as clumps of charge trapped on their surface. But those clumps are unstable, and even small changes in charge can affect the integrity of the stored data. "The difference between a 0 and a 1 is just 100 electrons," says Vigoda. "Today, one in every 1,000 bits is wrong when it is read out, and in the next generation, the number of errors will approach one bit in every hundred."

Error-checking chips can correct those errors by drawing on a unique code generated every time data is written to the chip. This checksum can be used to confirm whether the stored data has changed, and makes it possible to calculate which bits have flipped from 1s to 0s or vice versa. This requires the kind of statistical calculation that is difficult to implement in digital logic, says Vigoda, but is ideal for Lyric's approach. 
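
The principle can be seen in miniature with a classic Hamming(7,4) code, where three parity bits protect four data bits and the pattern of failed parity checks (the "syndrome") points directly at a flipped bit. This is a textbook illustration of checksum-based correction, not the code Lyric or flash vendors actually use:

    def encode(d):
        # d is a list of 4 data bits; parity bits go in positions 1, 2 and 4
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

    def correct(c):
        # Recompute the three parity checks; together they spell out the
        # 1-based position of a single flipped bit (0 means no error).
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3
        if syndrome:
            c[syndrome - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]   # the recovered data bits

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                          # simulate one bit flipped in storage
    assert correct(word) == [1, 0, 1, 1]  # the flip is found and undone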

The firm has been testing a probability chip's ability to perform error checks with one of the largest flash memory manufacturers. Compared to a typical error-correction chip used today, Lyric's chip takes up just a 30th of the space and uses a 12th of the energy. "We hope you'll be walking around with this in your pocket within two years," says Vigoda.

Error checking is becoming a bottleneck for flash performance and capacity, says Steven Swanson, a computer scientist at the University of California, San Diego, who studies the performance of flash chips. Hard disks, based on spinning magnetic platters, owe their capacity gains of recent years largely to built-in advanced error checking, says Swanson. "Compared to [hard disks], we're still in the early days for flash," says Swanson, "and it is pretty clear that as flash gets denser, error checking will become more important."

Although Lyric's probability chips could connect to conventional electronics, they work in a fundamentally different way, which may create some speed bumps in the minds of engineers, says Swanson. "As a flash engineer, I might find myself wanting to test more of these unusual chips than I would a conventional one to convince myself they are reliable."

By Tom Simonite
From Technology Review

An Implantable Antenna

Silk and gold, usually a pairing for the runways of Milan, are now the main ingredients for a new kind of implantable biosensor. Researchers at Tufts University have crafted a small antenna from liquid silk and micropatterned gold. The antenna is designed to spot specific proteins and chemicals in the body, and alert doctors wirelessly to signs of disease. Scientists say the implant could someday help patients with diabetes track their glucose levels without having to test themselves daily.

Silky sensor: A biosensor made from silk and gold can pick up tiny signals from proteins and chemicals in the body. Researchers patterned gold over a silk film and wrapped it into a capsule shape to form a small antenna.

According to Fiorenzo Omenetto, professor of biomedical engineering at Tufts University, silk is a natural platform for medical implants--it's biocompatible, and while it's delicate and pliable, it's also tougher than Kevlar. Implanted in the body, silk can conform to any tissue surface, and, unlike conventional polymer-based implants, it could stay in place over a long period of time without adverse effects. Omenetto has previously taken advantage of these properties to mold silk into tiny chips and flexible meshes, pairing the material with transistors to track molecules, and with electrodes to monitor brain activity.

Now Omenetto is exploring the combination of silk and metamaterials--metals like gold, copper, and silver manipulated at the micro- and nanoscale to exhibit electromagnetic characteristics not normally found in nature. For example, scientists have created metamaterials that act as "invisibility cloaks" by manipulating metals to bend light all the way around an object, rendering it invisible.

Omenetto and his colleague Richard Averitt, associate professor of physics at Boston University, used similar principles to create a metamaterial that's responsive not to visible light, but rather to frequencies further down the electromagnetic spectrum, within the terahertz range. Not coincidentally, proteins, enzymes, and chemicals in the body are naturally resonant at terahertz frequencies, and, according to Averitt, each biological agent has its own terahertz "signature."
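
For a sense of scale (standard physics, not figures from the study), wavelength is the speed of light divided by frequency, so terahertz radiation sits at fractions of a millimeter -- far longer than visible light:

    c = 3.0e8   # speed of light, m/s
    for f_thz in (0.5, 1.0, 2.0):
        wavelength_mm = c / (f_thz * 1e12) * 1e3
        print(f_thz, "THz ->", round(wavelength_mm, 2), "mm")
    # 0.5 THz -> 0.6 mm, 1.0 THz -> 0.3 mm, 2.0 THz -> 0.15 mm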

Terahertz science is a new and growing field, and several research groups are investigating specific protein "T-ray" signatures. A silk metamaterial antenna could someday pick up these specific signals and then send a wireless signal to a computer, to report on chemical levels and monitor disease.

To engineer the responsive end of such an antenna, the team first created a biocompatible base by boiling down silk and pouring the liquid solution into a centimeter-square film. The researchers then sprayed gold onto the silk film, using tiny stencils to create different patterns all along the film. Each area of the film responds to a different terahertz frequency depending on the shape of the gold pattern. The team then wrapped the patterned film around a capsule to form an antenna.

To test its performance, Omenetto and Averitt subjected the antenna to terahertz radiation and found that the antenna was resonant at specific frequencies. Going a step further, the researchers implanted the antenna in several layers of muscle tissue from a pig, and still detected a terahertz signal.

"We'll try to sense something next and maybe put the antenna in contact with something we'd like to detect, like glucose," says Omenetto. "We'll see if we can replicate a proof of principle, and try to add some meaning to the resonance."

Rajesh Naik, a materials science expert at the Air Force Research Laboratory at Wright-Patterson Air Force Base, says the research has great practical potential.

"Proteins and other molecules can be entrapped within silk films, allowing one to monitor in-vivo chemical reactions," says Naik. "Similar resonating structures can be patterned onto other polymeric materials, but silk has an added advantage of being biocompatible."

By Jennifer Chu
From Technology Review

Google Offers Cloud-Based Learning Engine

From Amazon's product recommendations to Pandora's ability to find us new songs we like, the smartest Web services around rely on machine learning--algorithms that enable software to learn how to respond with a degree of intelligence to new information or events.

Now Google has launched a service that could bring such smarts to many more apps. Google Prediction API provides a simple way for developers to create software that learns how to handle incoming data. For example, the Google-hosted algorithms could be trained to sort e-mails into categories for "complaints" and "praise" using a dataset that provides many examples of both kinds. Future e-mails could then be screened by software using that API, and handled accordingly.

Currently just "hundreds" of developers have access to the service, says Travis Green, Google's product manager for Prediction API, "but already we can see people doing some amazing things." Users range from developers of mobile and Web apps to oil companies, he says. "Many want to do product recommendation, and there are also interesting NGO use cases with ideas such as extracting emergency information from Twitter or other sources online."

Machine learning is not an easy feature to build into software. Different algorithms and mathematical techniques work best for different kinds of data. Specialized knowledge of machine learning is typically needed to consider using it in a product, says Green.

Google's service provides a kind of machine-learning black box--data goes in one end, and predictions come out the other. There are three basic commands: one to upload a collection of data, another telling the service to learn what it can from it, and a third to submit new data for the system to react to based on what it learned.
"Developers can deploy it on their site or app within 20 minutes," says Green. "We're trying to provide a really easy service that doesn't require them to spend month after month trying different algorithms." Google's black box actually contains a whole suite of different algorithms. When data is uploaded, all of the algorithms are automatically applied to find out which works best for a particular job, and the best algorithm is then used to handle any new information submitted.

"Getting machine learning to a Google scale is significant," says Joel Confino, a software developer in Philadelphia who builds large-scale Web apps for banks and pharmaceutical companies, and a member of the preview program. He used Prediction API to quickly develop a simple yet effective spam e-mail filter, and he says the service has clear commercial potential.

For example, a bank or credit-card company wanting to use machine learning to build systems that make decisions based on historical transactions is unlikely to have the specialized staff and necessary infrastructure for what is a computationally intensive approach. "This API could be a way to get a capability cheaply that would cost a huge amount through a traditional route."

Google's new service may also be more palatable to businesses wary of handing over their data to cloud providers, says Confino. "The data can be completely obfuscated, and you can still use this service. Google doesn't have to know if those numbers you are sending it are stock prices or housing prices."

Google does, however, get some information that it can use to improve its machine-learning algorithms. "We don't look at users' data, but we do see the same metrics on prediction quality that they do, to help us improve the service," says Green. The engineers running Prediction API will know if a particular algorithm is rarely used, or if a new one needs to be added to the mix to better process certain types of data.

Prediction API has the potential to be a leveler between established companies and smaller startups, says Pete Warden, an ex-Apple engineer now working on his own startup OpenHeatMap.com. "That's been a competitive advantage for large companies like Amazon, whose product recommendation is built on machine learning," he explains. "Now you still have to have a decent set of training data, but you don't have to have the same level of expertise."

Warden has yet to gain access to Prediction API, but has plans to use it to improve a service he built that shows where people using a particular word or phrase on Twitter are located. "It would be really interesting to also see where they are saying positive and negative things on a subject," says Warden. Prediction API could be trained to distinguish between positive and negative tweets to do that, he says.

Chris Bates, a data scientist with online music service Grooveshark and a member of the preview program, agrees that Google's black box will enable wider use of machine learning, but he contends that the service needs to mature. "Today it is good at predicting which language text is in and also sentiment analysis, for example to pick out positive and negative reviews," he says.

Ultimately, though, being unable to inspect the inner workings of the algorithms and fine-tune them for a specific use may have its limits. "It's good for cases that are not mission-critical, where you can afford a few false positives," Bates says. For example, a spam filter that lets through the occasional junk message could still be usable, but a credit-card company might be less able to accept any errors.

By Tom Simonite
From Technology Review

Darpa’s Butterfly-Inspired Sensors Light Up at Chemical Threats

It's the latest in a series of Darpa-funded efforts to use insects to spot weapons. Last year, the agency tapped researchers at Agiltron Corporation to implant larvae with micromechanical chemical sensors. In 2005, Darpa-backed scientists started training honey bees to become bomb sniffers.

This time, Darpa's interested in the chemical-sensing talents of butterflies. The agency's awarded $6.3 million to a consortium, led by GE Global Research, that'll develop synthetic versions of the nanostructures found on the scales of butterfly wings.

The Pentagon's got a new game plan to detect deadly chemical threats: tiny, iridescent sensors designed to mimic one of nature's most colorful creatures, the butterfly.

The project's lead researcher, Dr. Radislav Potyrailo, likens the nanostructures on the butterfly wing scales, which each measure around 50 by 100 microns, to "tiles on a roof." The science of chemical response behind the structures is based on photonics. The wings of Morpho butterflies change spectral reflectivity depending on the exposure of the scales to different vapors. As Potyrailo and his team write in a 2007 paper, published in Nature Photonics, "this optical response dramatically outperforms that of existing nano-engineered photonic sensors."

"This is a fundamentally different approach," he tells Danger Room. "Existing sensors can measure individual gases in the environment, but they suffer, big time, from interferences. This approach overcomes that hurdle."
A single sensor would be tailored to detect certain types of chemical agents or explosives, and do so without hindrance from other chemicals, airborne molecules or even humidity. Water molecules, Potyrailo points out, can mask a dangerous gas that's sparsely distributed but "is still able to have actionable effects in a military setting." And, much like their biological inspiration, the sensors would do the job with remarkable specificity.
"It would be science fiction to say ‘here is my sensor, it can selectively detect 1,000 different chemicals'," he says. "But what we're saying is that we can detect and distinguish between several important chemicals - without making mistakes, without false responses."

At around 1 x 1 cm apiece, the sensors are also small enough to be attached to clothing, installed in buildings or deployed "like confetti" over widespread regions. And they'd have helpful civilian uses as well, from food safety and water purification tests to emissions monitoring at power plants. So be careful the next time you swat an insect. It just might save your skin.

From gizmodo.com

15,000 Beams of Light: Pens That Write With Light Offer Low-Cost, Rapid Nanofabrication Capabilities

A Northwestern University research team has done just that -- drawing 15,000 identical skylines with tiny beams of light using an innovative nanofabrication technology called beam-pen lithography (BPL).

Details of the new method, which could do for nanofabrication what the desktop printer has done for printing and information transfer, will be published Aug. 1 by the journal Nature Nanotechnology.

The Northwestern technology offers a means to rapidly and inexpensively make and prototype circuits, optoelectronics and medical diagnostics and promises many other applications in the electronics, photonics and life sciences industries.

Using beam-pen lithography, Northwestern researchers patterned 15,000 replicas of the Chicago skyline simultaneously over square centimeters of space in about half an hour. 

"It's all about miniaturization," said Chad A. Mirkin, George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences and director of Northwestern's International Institute for Nanotechnology. "Rapid and large-scale transfer of information drives the world. But conventional micro- and nanofabrication tools for making structures are very expensive. We are trying to change that with this new approach to photolithography and nanopatterning."

Using beam-pen lithography, the researchers patterned 15,000 replicas of the Chicago skyline (featuring the Willis Tower and the John Hancock Center) simultaneously in about half an hour, with 15,000 tiny pens depositing the skylines over square centimeters of space. Conventional nanopatterning technologies, such as electron-beam lithography, can make similarly small structures but are inherently low-throughput and unsuited to large-area nanofabrication.

Each skyline pattern is made up of 182 dots, with each dot approximately 500 nanometers in diameter -- roughly the diameter of each pen's tip. The light exposure time for each dot was 20 seconds. The current method allows researchers to make structures as small as 150 nanometers, but refinements of the pen architecture likely will increase resolution to below 100 nanometers. (Although not reported in the paper, the researchers have created an array of 11 million pens in an area only a few centimeters square.)

Beam-pen lithography is the third type of "pen" in Mirkin's nanofabrication arsenal. He developed polymer-pen lithography (PPL) in 2008 and Dip-Pen Nanolithography (DPN) in 1999, both of which deliver chemical materials to a surface and have since been commercialized into research-grade nanofabrication tools that are now used in 23 countries around the world.

Like PPL, beam-pen lithography uses an array of tiny pens made of a polymer to print patterns over large areas with nanoscopic through macroscopic resolution. But instead of using an "ink" of molecules, BPL draws patterns using light on a light-sensitive material.

Each pen is in the shape of a pyramid, with the point as its tip. The researchers coat the pyramids with a very thin layer of gold and then remove a tiny amount of gold from each tip. The large open tops of the pyramids (the back side of the array) are exposed to light, and the gold-plated pyramids channel the light to the tips. A fine beam of light comes from each tip, where the gold was removed, exposing the light-sensitive material at each point. This allows the researchers to print patterns with great precision and ease.

"Another advantage is that we don't have to use all the pens at once -- we can shut some off and turn on others," said Mirkin, who also is professor of medicine and professor of materials science and engineering. "Because the tops of the pyramids are on the microscale, we can control each individual tip."

Beam-pen lithography could lead to the development of a desktop printer of sorts for nanofabrication, giving individual researchers a great deal of control of their work.

"Such an instrument would allow researchers at universities and in the electronics industry around the world to rapidly prototype -- and possibly produce -- high-resolution electronic devices and systems right in the lab," Mirkin said. "They want to test their patterns immediately, not have to wait for a third-party to produce prototypes, which is what happens now."

In addition to Mirkin, other authors of the paper are Fengwei Huo, Gengfeng Zheng, Xing Liao, Louise R. Giam, Jinan Chai, Xiaodong Chen and Wooyoung Shim, all from Northwestern.

From sciencedaily.com

The Science of a Sparkling Shave

For the last few months, Andre Flöter has been shaving with a diamond-tipped razor blade.
He's not some nouveau riche flaunting the newest kind of bling. He's the founder of GFD, a German company that for the last seven years has been selling blades that are coated with synthetic diamond and used for industrial purposes--such as medical scalpels and instruments that cut plastic sheeting. Now Flöter hopes to use the exceptional hardness of diamond to crack the multibillion-dollar market for consumer razor blades.
Seated in a café in Mannheim, Germany, a couple hours north of his office in Ulm (Albert Einstein's birthplace), Flöter whips out a plastic-handled razor that looks like ones you have at home. But inserted into this one is a prototype of GFD's diamond-tipped blades. 

Cutting edge: These photos show the stages in which GFD, a German company, takes a carbide blade, adds a coating of nanocrystalline diamond, and sharpens it with ions.

He demonstrates against his own arm hair how it cuts as smoothly as a regular razor. He hands it to me so I can try, and it feels like my regular razor. But one major difference, Flöter says, is that his diamond-tipped blade should last several years rather than a few weeks.

The body of the blade is made of tungsten carbide, a dense metal compound, and seems just like a typical commercial razor blade, except it is a little heavier and has a darker metallic color. The coating of synthetic diamond--carbon manipulated at the nanoscale--in the tip doesn't make it look shiny at all.

Flöter won't reveal details of how GFD creates a film of synthetic diamond. He's more forthcoming about how the company's blades, once made, are sharpened. The engineers take dozens of blades and stand them upright in a vacuum chamber. Then they hit the blades with ions of oxygen or chlorine gas that has been excited to a plasma state with an electric field. The process is akin to using extremely fine-grained sandpaper as a sharpener. 

The resulting blade has a "radius of curvature"--the tiny edge of the blade, which is actually rounded at the microscopic level -- of about 50 nanometers. That's about 10 times sharper than the blades GFD sells for plastic sheet cutting. Flöter gives me his razor again: Not only does it cut when I press against my skin, as I would during a normal shave, but even just grazing the tips of my arm hair, the blade cuts with no effort at all.  
To be sure, blades made this way would make razors much more expensive. But because they could last much longer than a cheap disposable razor, the blades could be cost-effective in the long run, perhaps paying for themselves in about a year, GFD hopes. First, though, Flöter needs a blade manufacturer to partner with his seven-employee company. If all goes well, his blades could hit the market within two or three years, he says.

It wouldn't be the first time diamond blades were marketed; Schick used to sell a razor it called the FX Diamond. But it didn't cost much more than standard blades; Flöter says Schick didn't produce a substantially harder or longer-lasting blade because it didn't use a pure diamond coating and didn't sharpen it the way GFD does. A Schick spokeswoman declined to comment on GFD's technology.

By Cyrus Farivar

Heartbeats at the Speed of Light

For the first time, researchers have controlled the pace of an embryonic heart using pulses of light. The new method is a leap forward for cardiologists and developmental biologists, who hope it will help yield a better understanding of heart development and congenital heart disease. They also hope the development could eventually lead to new types of optical pacemakers.

Light beat: Researchers used light to control the heartbeat of this 53-hour-old quail embryo. The infrared laser pulses were transmitted to the embryo via an optical fiber (bottom) placed just a millimeter away from the developing heart.

Artificial pacemakers normally use electrodes to deliver regular, "paced" electrical impulses to the heart muscle to keep its beats consistent. While the devices are safe in the short term, they can damage the muscle if used over decades. The technique's intrusiveness--it requires contact with the heart--also limits its usefulness as a research tool.

"If you're trying to use an electrode to touch the heart and stimulate it, the contacts could disrupt potential observation of the heartbeat," says Ed Boyden, a professor of biological engineering at MIT. Boyden was not involved in the research. "A noninvasive methodology for pulsing the heart is important for science. Potentially, this could open up a lot of experiments."

The idea of controlling cells with light is not new: Some labs have shown that neurons can be turned on and off with optical stimulation, and one group has used powerful laser pulses to pace cardiac cells in culture. But this is the first time that an entire heart in a live animal has been paced with light.

In the new technique, described today in the journal Nature Photonics, scientists placed an infrared laser fiber one millimeter above the developing hearts of two-day-old quail embryos. As they changed the pace of the laser pulses, the heartbeat shifted to match. "Noninvasively pacing a heart with light has different advantages and disadvantages from electrical stimulation," says Michael Jenkins, a postdoc in bioengineering at Case Western Reserve University in Cleveland, and the study's first author. "It has the potential to be used all the way from basic research to clinical applications." 
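
The pacing arithmetic itself is simple: one laser pulse per intended beat. The sketch below converts a target heart rate into a pulse schedule; the pulse width and the rates are illustrative, not the study's actual settings.

    # Convert a target heart rate into an infrared pulse schedule.
    # One pulse per intended beat; all parameters are illustrative only.
    def pulse_schedule(rate_bpm, n_pulses=5, pulse_width_ms=10.0):
        """Return (start_ms, stop_ms) firing windows for a pulse train."""
        period_ms = 60_000.0 / rate_bpm
        return [(i * period_ms, i * period_ms + pulse_width_ms)
                for i in range(n_pulses)]

    for rate in (120, 180):  # embryonic hearts beat fast
        print(f"{rate} bpm -> one pulse every {60_000 / rate:.0f} ms: "
              f"{pulse_schedule(rate)[:2]} ...")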

Jenkins and his colleagues are now working on optical pacing of adult quail hearts, using the same technique of threading an optical fiber into the body so that it sits next to the heart. But adult tissues are more opaque than embryonic ones, making light penetration more difficult. And because adult quails are larger, it's harder to find precisely the right spot on which to train the light.

Scientists hope that this type of nonintrusive method can help uncover more about cardiac development and the unknown causes of congenital heart disease. "We can use it to noninvasively study how environmental changes that change the heart rate might lead to heart defects," Jenkins says. "Or to study how changes in the heart rate during crucial stages of growth might change gene expression." 

Jenkins can't yet explain how the light causes the heart muscle to contract, and says that it may take years to figure this out. But he thinks it may be that the infrared wavelengths are heating up the cells, somehow changing the ability of a particular set of ion channels to transmit a charge. 

Douglas Cowan, an assistant professor at Children's Hospital in Boston, has been working on a different method for stimulating heart muscle function, one based on tissue engineering. He posits that Jenkins's approach might help nudge stem cells to differentiate into beating cardiac cells. 

Because light waves can't get through more than a few layers of tissue, the researchers suggest that, in humans, their technique might be particularly useful for pacing an exposed heart during surgery. 

Beyond that, an implanted optical pacemaker--while still a far-off prospect--is an idea that many researchers believe is worth pursuing. "In kids, electric pacing for 15 or 20 years causes the heart not to develop so well," Jenkins says. While this is less worrying for a 75-year-old, it can be a big concern for a pediatrician who's putting a pacemaker into a three-year-old. "If optical pacing were able to pace the heart in such a way that it beats more intrinsically, that could be an advantage," he says.

By Lauren Gravitz
From Technology Review

Better Displays Ahead

Several e-reader products on the market today use electrophoretic displays, in which each pixel consists of microscopic capsules containing black and white particles that move in opposite directions under the influence of an electric field. A serious drawback to this technology is that the screen image is closer to black-on-gray than black-on-white. The slow switching speed (~1 second), a consequence of the particles' limited velocity, also rules out highly desirable features such as touch commands, animation, and video.
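
A quick calculation shows why video is out of reach: smooth video needs a full refresh roughly every frame, and a one-second switch misses that budget by more than an order of magnitude.

    # Why ~1 s electrophoretic switching rules out video.
    video_fps = 30                        # typical video frame rate
    frame_budget_ms = 1000.0 / video_fps  # time available per frame
    eink_switch_ms = 1000.0               # ~1 second, per the article

    print(f"Frame budget: {frame_budget_ms:.1f} ms; "
          f"electrophoretic switch: {eink_switch_ms:.0f} ms "
          f"(~{eink_switch_ms / frame_budget_ms:.0f}x too slow)")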

A prototype of the vertical-stack multicolor electrowetting display device is shown in the photograph. Arrays of ~1,000-2,000 pixels were constructed, with pixel sizes of 200 × 600 µm and 300 × 900 µm.

Researchers at the University of Cincinnati Nanoelectronics Laboratory are actively pursuing an alternative approach for low-power displays. Their assessment of the future of display technologies appears in the American Institute of Physics' Applied Physics Letters.

"Our approach is based on the concept of vertically stacking electrowetting devices," explains professor Andrew J. Steckl, director of the NanoLab at UC's Department of Electrical and Computer Engineering. "The electric field controls the 'wetting' properties on a fluoropolymer surface, which results in rapid manipulation of liquid on a micrometer scale. Electrowetting displays can operate in both reflective and transmissive modes, broadening their range of display applications. And now, improvements of the hydrophobic insulator material and the working liquids enable EW operation at fairly low driving voltages (~15V)."

Steckl and Dr. Han You, a research associate in the NanoLab, have demonstrated that the vertical stack electrowetting structure can produce multi-color e-paper devices, with the potential for higher resolution than the conventional side-by-side pixel approach. Furthermore, their device has switching speeds that enable video content displays.

What does all of this mean for the consumer? Essentially, tablets and e-readers are about to become capable of even more and look even better doing it. Compared with other technologies, electrowetting reflective displays boast many advantages: they are very thin, they switch fast enough for video, they offer a wide viewing angle and, just as important, Steckl says, they aren't power hogs.

From sciencedaily.com

A New Way to Use the Sun's Energy

A new type of device that uses both heat and light from the sun should be more efficient than conventional solar cells, which convert only the light into electricity.

The device relies on a physical principle discovered and demonstrated by researchers at Stanford University. In their prototype, the energy in sunlight excites electrons in an electrode, and heat from the sun coaxes the excited electrons to jump across a vacuum into another electrode, generating an electrical current. The device could be designed to send waste heat to a steam engine and convert 50 percent of the energy in sunlight into electricity--a huge improvement over conventional solar cells.

The most common silicon solar cells convert about 15 percent of the energy in sunlight into electricity. More than half of the incoming solar energy is lost as heat. That's because the active materials in solar cells can interact with only a particular band of the solar spectrum; photons below a certain energy level simply heat up the cell.
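
The cutoff follows directly from the bandgap: a photon must carry at least the gap energy to free an electron, so the threshold wavelength is Planck's constant times the speed of light, divided by the gap. A two-line check for silicon:

    # Photons below the bandgap can't excite electrons; they only heat the cell.
    h, c, eV = 6.626e-34, 3.0e8, 1.602e-19  # Planck, light speed, J per eV
    E_gap_si = 1.1                          # silicon bandgap, eV

    cutoff_nm = h * c / (E_gap_si * eV) * 1e9
    print(f"Silicon cutoff: ~{cutoff_nm:.0f} nm")  # ~1100 nm; longer-wavelength
                                                   # infrared becomes waste heat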

Bright heat: Nicholas Melosh has developed a device for simultaneously converting the sun’s light and heat into electricity. Melosh makes and tests the device in this vacuum chamber in his lab at Stanford University.

One way to overcome this is to stack active materials on top of one another in a multijunction cell that can use a broader spectrum of light, turning more of it into electrical current instead of heat, for efficiencies up to about 40 percent. But such cells are complex and expensive to make.

Looking for a better way to take advantage of the sun's heat, Stanford's Nicholas Melosh was inspired by highly efficient cogeneration systems that use the expansion of burning gas to drive a turbine and the heat from the combustion to power a steam engine. But thermal energy converters don't pair well with conventional solar devices. The hotter it is, the more efficient thermal energy conversion becomes. Solar cells, by contrast, get less efficient as they heat up. At about 100 °C, a silicon cell won't work well; above 200 °C, it won't work at all.

The breakthrough came when the Stanford researchers realized that the light in solar radiation could enhance energy conversion in a different type of device, called a thermionic energy converter, that's conventionally driven solely by heat. Thermionic converters consist of two electrodes separated by a small space. When the positive electrode, or cathode, is heated, electrons in the cathode get excited and jump across to the negative electrode, or anode, driving a current through an external circuit. These devices have been used to power Russian satellites but haven't found any applications on the ground because they must get very hot, about 1,500 °C, to operate efficiently. The cathode in these devices is typically made of metals such as cesium.

Melosh's group replaced the cesium cathode with a wafer of semiconducting material that can make use of not only heat but also light. When light strikes the cathode, it transmits its energy to electrons in the material in a way that's similar to what happens in a solar cell. This type of energy transfer doesn't happen in the metals used to make these cathodes in the past, but it's typical of semiconductor materials. It doesn't take quite as much heat for these "preëxcited" electrons to jump to the anode, so this new device can operate at lower temperatures than conventional thermionic converters, but at higher temperatures than a solar cell.

The Stanford researchers call this new mechanism PETE, for photon-enhanced thermionic emission. "The light helps lift the energy level of the electrons so that they will flow," says Gang Chen, professor of power engineering at MIT. "It's a long way to a practical device, but this work shows that it's possible," he says.
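
The temperature sensitivity behind all this follows from the Richardson-Dushman law for thermionic emission. The sketch below treats photoexcitation as an effective lowering of the emission barrier--a deliberate simplification of the PETE mechanism, with illustrative numbers--but it shows why a smaller barrier permits a far cooler cathode.

    import math

    # Richardson-Dushman law: J = A * T**2 * exp(-W / (kB * T)).
    # Treating photoexcitation as a lower effective barrier W is a
    # simplification of PETE, used here only to show the trend.
    A = 1.2e6        # Richardson constant, A/(m^2 K^2)
    kB = 8.617e-5    # Boltzmann constant, eV/K

    def current_density(T, W):
        return A * T**2 * math.exp(-W / (kB * T))

    # Barriers and temperatures below are illustrative assumptions.
    for label, T_c, W in [("metal cathode", 1500, 2.0),
                          ("photon-assisted cathode", 500, 1.4)]:
        T = T_c + 273.15
        print(f"{label}: J ~ {current_density(T, W):.1e} A/m^2 at {T_c} C")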

The Stanford group's prototype, described this month in the journal Nature Materials, uses gallium nitride as the semiconductor. It converts about 25 percent of the energy in light into electricity at 200 °C, and the efficiency rises with the temperature. Stuart Licht, professor of chemistry at George Washington University, says the process would have an "advantage over solar cells" because it makes use of heat in addition to light. But he cautions: "Additional work will be needed to translate this into a practical, more efficient device."

The Stanford group is now working to do just that. The researchers are testing devices made from materials that are better suited to solar energy conversion, including silicon and gallium arsenide. They're also developing ways of treating these materials so that the device will work more efficiently in a temperature range of 400 °C to 600 °C; solar concentrators would be used to generate such high temperatures from sunlight.

Even at high temperatures, the photon-enhanced thermionic converter will generate more heat than it can use; Melosh says this heat could be coupled to a steam engine for a solar-energy-to-electricity conversion efficiency exceeding 50 percent. These systems are likely to be too complex and expensive for small-scale rooftop installations. But they could be economical for large solar-farm installations, says Melosh, a professor of materials science and engineering. He hopes to have a device ready for commercial development in three years.
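
The arithmetic behind a greater-than-50-percent figure is simple tandem accounting; the stage efficiencies below are illustrative assumptions, not Melosh's projections.

    # Tandem accounting: the PETE stage converts part of the sunlight,
    # and a heat engine runs on the remainder (the waste heat).
    eta_pete = 0.30         # assumed PETE stage efficiency
    eta_heat_engine = 0.30  # assumed steam-cycle efficiency on waste heat

    eta_total = eta_pete + (1 - eta_pete) * eta_heat_engine
    print(f"Combined efficiency: {eta_total:.0%}")  # 51% under these assumptions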

By Katherine Bourzac 
From Technology Review

A Sticker Makes Solar Panels Work Better

The power output of solar panels can be boosted by 10 percent just by applying a big transparent sticker to the front. Developed by a small startup called Genie Lens Technologies, the sticker is a polymer film embossed with microstructures that bend incoming sunlight. The result: the active materials in the panels absorb more light, and convert more of it into electricity.

The technology is cheap and could lower the cost per watt of solar power. Also, unlike other technologies developed to improve solar panel performance, this one can be added to panels that have already been installed.

 Power film: A thin plastic sheet covered with microscopic structures is applied to the front of a solar panel to increase the amount of light it absorbs.

The polymer film does three main things, says Seth Weiss, CEO and cofounder of Genie Lens, based in Englewood, CO. It prevents light from reflecting off the surface of solar panels. It traps light inside the semiconductor materials that absorb light and convert it to electricity. And it redirects incoming light so that rather than passing through the thin semiconductor material, it travels along its surface, increasing the chances it will be absorbed. 

Researchers designed the microstructures that accomplish this using algorithms that model how rays of light behave as they enter the film and encounter the various surfaces within the solar panel--the protective glass cover, the semiconductor material, and the back surface of the panel--throughout the day. The key was bending the light by the optimal amount: enough that it enters the solar panel at an angle, but not at so steep an angle that it reflects off and is lost. If light does reflect off either the glass or the semiconductor surface, the film redirects much of it back into the solar panel.
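
A toy version of that ray calculation, using nothing more than Snell's law (and not Genie Lens's proprietary model), illustrates the constraint:

    import math

    n_air, n_glass = 1.0, 1.5  # refractive indices (glass value assumed)

    def refracted_angle(theta_in_deg, n1, n2):
        """Snell's law: n1*sin(t1) = n2*sin(t2); None on total internal reflection."""
        s = n1 * math.sin(math.radians(theta_in_deg)) / n2
        return math.degrees(math.asin(s)) if abs(s) <= 1.0 else None

    # Rays steeper than the critical angle can't escape the glass back to air.
    theta_crit = math.degrees(math.asin(n_air / n_glass))
    print(f"Critical angle in glass: {theta_crit:.1f} deg")

    for tilt in (0, 20, 40, 60):  # hypothetical tilts imposed by the film
        inside = refracted_angle(tilt, n_air, n_glass)
        print(f"incident {tilt:>2} deg -> {inside:.1f} deg inside the glass")

Note what the numbers show: a ray arriving from air refracts to at most the 41.8-degree critical angle, so simply tilting incoming light can never trap it by itself. Trapping depends on the embossed structures steering rays past the critical angle once they are already inside the panel--presumably the balance the designers had to strike.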

Tests at the National Renewable Energy Laboratory showed that the film increases power output on average between 4 percent and 12.5 percent, with the best improvement under cloudy conditions, when incoming light is diffuse. Adding the film--either in the factory, which is optimal, or on solar panels already in use--increases the overall cost of solar panels by between 1 percent and 10 percent. But the panels would then produce enough additional electricity to justify the price. What's more, increasing the power output of a solar panel decreases other costs--such as shipping and installation--because fewer solar panels are required at each installation, says Travis Bradford, a solar industry analyst and president of the Prometheus Institute.
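
A simple cost-per-watt comparison shows why the economics can work out; the panel price and the mid-range percentages below are assumptions for illustration.

    # Cost per watt, with and without the film (all figures assumed).
    panel_cost = 400.0     # dollars for a bare panel
    panel_watts = 200.0    # rated output
    film_cost_frac = 0.05  # film adds ~1-10% to cost; take 5%
    film_gain = 0.08       # output gain of ~4-12.5%; take 8%

    base = panel_cost / panel_watts
    with_film = panel_cost * (1 + film_cost_frac) / (panel_watts * (1 + film_gain))
    print(f"Bare panel: ${base:.2f}/W; with film: ${with_film:.2f}/W")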

Yet the overall benefit depends on how long the polymer film lasts. The cost per kilowatt hour of solar power is figured by estimating the total power output of the solar panel over its 20- to 25-year warranty. If the film is scratched, attracts dust, or becomes discolored after years or decades in the sun, it could actually lower power output over time. "Durability is a big issue," Bradford says. The materials used in solar panels today have been tested over decades, and although Weiss says his company's films will last for 20 years, their durability hasn't been verified. 

Meanwhile, many solar panel companies are developing related approaches for increasing the amount of light a solar panel absorbs. For example, Innovalight, based in Sunnyvale, CA, has developed a method for printing silicon nanoparticles that can increase the amount of light conventional crystalline silicon solar panels absorb. It's working with two major solar manufacturers, JA Solar and Yingli, to commercialize the technology. Unlike many of these other approaches, which are designed for particular solar panel materials, the Genie Lens films can be applied to any type of solar panel--including crystalline silicon and newer thin-film technologies.

By Kevin Bullis 
From Technology Review