Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical optical switch on and off at a very high speed........


IBM Creates the Most Detailed Map of the Brain To Date

In a paper published earlier this week, IBM researchers made huge strides in mapping the architecture of the brain, charting three times as many connections as any previous study. Where does such a map lead? The future of cognitive computing.


Specifically, the study traced long-distance connections in the brain of a macaque monkey, the "interstate highways" that transmit information between distant areas of the brain.

Their map depicts 6,602 long-distance connections between 383 different regions of the brain, allowing researchers to grasp how and where the brain sends information better than ever before.

Such data will allow scientists to more accurately perform theoretical analysis—the same type of projections that optimize search engines or track social networks—which will be essential in developing computer chips that can keep up with our brain's immense computational power and navigate its complex architecture.
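The kind of network analysis the researchers describe can be pictured with a few lines of code. The sketch below is purely illustrative: it builds a tiny, hypothetical graph of brain regions (the real map links 383 regions with 6,602 connections) and runs the sort of hub-counting and shortest-path queries used to study such maps.

```python
# Illustrative only: a tiny, hypothetical graph of brain regions. The real map links
# 383 regions with 6,602 long-distance connections; the edges below are made up.
from collections import defaultdict, deque

edges = [
    ("V1", "V2"), ("V2", "MT"), ("MT", "LIP"),
    ("LIP", "FEF"), ("FEF", "PFC"), ("V1", "PFC"),
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)

def shortest_hops(graph, start, goal):
    """Breadth-first search: fewest long-distance hops from one region to another."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        region, hops = frontier.popleft()
        if region == goal:
            return hops
        for nxt in graph[region]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return None  # no path found

# Which regions send out the most long-distance connections?
hubs = sorted(graph, key=lambda r: len(graph[r]), reverse=True)
print("busiest source regions:", hubs[:3])
print("hops from V1 to PFC:", shortest_hops(graph, "V1", "PFC"))
```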

DIY Wearable Computer Turns You Into a Cyborg

Someday humans and computers will meld together to create cyborgs. But instead of waiting for it, Martin Magnusson, a Swedish researcher and entrepreneur, has taken the first step and created a wearable computer that can be slung across the body.


Magnusson has hacked a pair of head-mounted display glasses and combined them with a homebrewed machine based on an open source Beagleboard single-board computer. With the computer packed into a CD case and slung across the shoulder, messenger-bag style, he is ready to roll.

A computer is a window to the virtual world, says Magnusson.
"But as soon as I get up and about, that window closes and I'm stuck within the limits of physical reality," he says. "Wearable computers make it possible to keep the window open. All the time."

Magnusson's idea is interesting, though it stops one step short of integrating a machine inside the body. In 2008, Canadian filmmaker Rob Spence decided to embed a tiny video camera in his prosthetic left eye. Spence, who is still working on the project, hopes to someday record everything around him as he sees it and lifecast it.
For his wearable computer, Magnusson is using a pair of Myvu glasses that slide on like a pair of sunglasses but have a tiny video screen built into the lens. A Beagleboard running Angstrom Linux and a Plexgear mini USB hub that drives the Bluetooth adapter and display form the rest of this rather simple machine. Four 2700 mAh AA batteries power the USB hub. Magnusson uses a foldable Nokia keyboard for input and pipes internet connectivity through Bluetooth tethering to an iPhone in his pocket.
Magnusson says he wants to use the wearable computer to "augment" his memory.

"By having my to-do list in the corner of my eye, I always remember the details of my schedule," he says.
Check out photos of his gear:

The innards of the homebrewed machine are glued to a CD case. The CD case is slung across the shoulder by attaching it to a strap using velcro.


What the homebrewed computer looks like:



Genes to Make Hydrocarbon Fuels

Many species naturally make small amounts of hydrocarbons. Now researchers at the startup LS9, based in South San Francisco, CA, have described the genes and enzymes responsible for this production of alkanes, the major components of fuels such as diesel. The findings, reported in the current issue of the journal Science, have allowed the researchers to engineer E. coli bacteria that can secrete alkane hydrocarbons capable of being burned in diesel engines. LS9 had previously reported using bacteria to produce hydrocarbon fuel, but this is the first time the researchers have revealed how they did it.

"This is the first characterization of these enzymes. Virtually nothing was known about what enzymes were responsible, and how do they do it," says Frances Arnold, a professor of chemical engineering, bioengineering, and biochemistry at Caltech. Arnold was not involved in the LS9 work. The discovery "opens up a whole new set of possibilities," she says. "These reactions are very interesting. Nature has made a few versions of them. Now, in the laboratory, we can make many more versions, so your imagination can run wild." Any commercial applications Arnold and others discover, however, will likely require a licensing agreement with LS9, which has filed for a patent for its discovery.

Fuel gene: The cyanobacteria pictured here naturally produce hydrocarbons that can be used as fuel in diesel engines. Researchers at LS9 have now identified the genes that are responsible.

The LS9 researchers discovered the genes involved by comparing the genomes of 10 strains of cyanobacteria (also called blue-green algae) that naturally produce alkanes with a very similar strain that produces no alkanes. They identified 20 genes that the alkane-producing strains had but that the non-alkane-producing strain lacked. From there, the researchers narrowed down the possibilities until they identified the genes and enzymes necessary for alkane production. They confirmed their discovery by incorporating the genes into E. coli and measuring the alkanes that the bacteria subsequently made. The bacteria secrete the alkanes, which can then be easily collected and used as a fuel.
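At its core, the gene hunt described above is a set comparison: keep the genes shared by every alkane-producing strain and discard anything the non-producer also carries. The Python sketch below illustrates that logic with made-up strain names and gene IDs; only the overall approach comes from the article.

```python
# Toy version of the comparative step described above: intersect the gene sets of the
# alkane-producing strains, then subtract the non-producer's genes. All names are invented.

producing_strains = {
    "strain_A": {"geneA", "geneB", "alk1", "alk2", "geneC"},
    "strain_B": {"geneA", "alk1", "alk2", "geneD"},
    "strain_C": {"geneB", "alk1", "alk2", "geneC"},
}
non_producer = {"geneA", "geneB", "geneC", "geneD"}

shared_by_producers = set.intersection(*producing_strains.values())
candidates = shared_by_producers - non_producer

print("candidate alkane-pathway genes:", sorted(candidates))   # ['alk1', 'alk2']
```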

Organisms make alkanes via a complex process that produces fatty acids from carbon dioxide or sugars. The fatty acids are then converted by the organisms to an aldehyde that includes a carbon atom bonded to an oxygen atom (together they create what's called a carbonyl group). The enzyme aldehyde decarbonylase helps remove this group to form a chain of hydrogen and carbon atoms--the hydrocarbon. The natural process produces a collection of hydrocarbons of various lengths that are comparable to the hydrocarbon molecules in diesel.

Several research groups at universities and companies have been searching for ways to make renewable fuels that are similar enough to petroleum-based gasoline, diesel, and jet fuel to be used in existing vehicles. Such fuels would be more versatile than ethanol, which can't be used in high concentrations in ordinary engines. The LS9 discovery is a boost to this effort. 

But work remains before the genes can make commercial quantities of fuel at prices that can compete with fossil fuels. "This is a long way from describing a commercially viable process for making alkanes," Arnold says. "Fuel has to be dirt cheap. It's not clear that we're ever going to make it cheap and easy by this route." One fundamental challenge is scaling up the process. "It all comes down to whether you can send enough carbon through that pathway to get to industrial levels," she says. 

LS9 has been genetically engineering E. coli to optimize the process by which the bacteria convert sugar into fuel. For example, the E. coli naturally feed on some of the fatty acids they produce, rather than using them as a feedstock for producing alkanes. LS9 is altering the bacteria so that they don't eat the fatty acids, which helps increase fuel yields, says Andreas Schirmer, LS9's associate director of metabolic engineering.

The company's first fuel to market will probably not be a hydrocarbon. Four years ago, the company began developing a fuel based on fatty esters that it says could serve as a replacement for diesel fuel, and is closer to market than the hydrocarbon fuel it started developing two years ago when it first identified the alkane genes and enzymes.

By Kevin Bullis 
From Technology Review

Infectious Prions Can Arise Spontaneously in Normal Brain Tissue, Study Shows

The catalyst in the study was the metallic surface of simple steel wires. Previous research showed that prions bind readily to these types of surfaces and can initiate infection with remarkable efficiency. Surprisingly, according to the new research, wires coated with uninfected brain homogenate could also initiate prion disease in cell culture, which was transmissible to mice.

The findings are being published in the online edition of the journal Proceedings of the National Academy of Sciences (PNAS).



"Prion diseases such as sporadic Creutzfeldt-Jakob disease in humans or atypical bovine spongiform encephalopathy, a form of mad cow disease, occur rarely and at random," said Charles Weissmann, M.D., Ph.D., chair of Scripps Florida's Department of Infectology, who led the study with John Collinge, head of the Department of Neurodegenerative Disease at UCL Institute of Neurology. "It has been proposed that these events reflect rare, spontaneous formation of prions in brain. Our study offers experimental proof that prions can in fact originate spontaneously, and shows that this event is promoted by contact with steel surfaces."
Infectious prions, which are composed solely of protein, are classified by distinct strains, originally characterized by their incubation time and the disease they cause. These toxic prions have the ability to reproduce, despite the fact that they contain no nucleic acid genome.

Mammalian cells normally produce harmless cellular prion protein (PrPC). Following prion infection, the abnormal or misfolded prion protein (PrPSc) converts PrPC into a likeness of itself, by causing it to change its conformation or shape. The end-stage consists of large aggregates of these misfolded proteins, which cause massive tissue and cell damage.

A Highly Sensitive Test
In the new study, the scientists used the Scrapie Cell Assay, a test originally created by Weissmann that is highly sensitive to minute quantities of prions.
Using the Scrapie Cell Assay to measure infectivity of prion-coated wires, the team observed several unexpected instances of infectious prions in control groups where metal wires had been exposed only to uninfected normal mouse brain tissue. In the current study, this phenomenon was investigated in rigorous and exhaustive control experiments specifically designed to exclude prion contamination. Weissmann and his colleagues in London found that when normal prion protein is coated onto steel wires and brought into contact with cultured cells, a small but significant proportion of the coated wires cause prion infection of the cells -- and when transferred to mice, they continue to spawn the disease.

Weissmann noted that an alternative interpretation of the results is that infectious prions are naturally present in the brain at levels not detectable by conventional methods, and are normally destroyed at the same rate they are created. If that is the case, he noted, metal surfaces could be acting to concentrate the infectious prions to the extent that they became quantifiable by the team's testing methods.

The first author of the study, "Spontaneous Generation of Mammalian Prions," is Julie Edgeworth of the UCL Institute of Neurology. Other authors of the study include Nathalie Gros, Jack Alden, Susan Joiner, Jonathan D.F. Wadsworth, Jackie Linehan, Sebastian Brandner, and Graham S. Jackson, also of the UCL Institute of Neurology.

The study was supported by the U.K. Medical Research Council.

From sciencedaily.com

Largest Particle Accelerator 'Rediscovers' Fundamental Subatomic Particles

First results from the LHC at CERN are being revealed at the International Conference on High Energy Physics (ICHEP), the world's largest international conference on particle physics, which has attracted more than 1000 participants to its venue in Paris. The spokespersons of the four major experiments at the LHC -- ALICE, ATLAS, CMS and LHCb -- are presenting measurements from the first three months of successful LHC operation at 3.5 TeV per beam, an energy three and a half times higher than previously achieved at a particle accelerator.

 Particle tracks fly out from the heart of the ALICE experiment from one of the first collisions at a total energy of 7 TeV.

With these first measurements the experiments are rediscovering the particles that lie at the heart of the Standard Model -- the package that contains current understanding of the particles of matter and the forces that act between them. This is an essential step before moving on to make discoveries. Among the billions of collisions already recorded are some that contain 'candidates' for the top quark, for the first time at a European laboratory.

"Rediscovering our 'old friends' in the particle world shows that the LHC experiments are well prepared to enter new territory," said CERN's Director-General Rolf Heuer. "It seems that the Standard Model is working as expected. Now it is down to nature to show us what is new."

The quality of the results presented at ICHEP bears witness both to the excellent performance of the LHC and to the high quality of the data in the experiments. The LHC, which is still in its early days, is making steady progress towards its ultimate operating conditions. The luminosity -- a measure of the collision rate -- has already risen by a factor of more than a thousand since the end of March. This rapid progress with commissioning the LHC beam has been matched by the speed with which the data on billions of collisions have been processed by the Worldwide LHC Computing Grid, which allows data from the experiments to be analysed at collaborating centres around the world.

"Within days we were finding Ws, and later Zs -- the two carriers of the weak force discovered here at CERN nearly 30 years ago," said Fabiola Gianotti, spokesperson for the 3000-strong ATLAS collaboration. "Thanks to the efforts of the whole collaboration, in particular the young scientists, everything from data-taking at the detector, through calibration, data processing and distribution, to the physics analysis, has worked fast and efficiently."

"It is amazing to see how quickly we have 're-discovered' the known particles: from the lightest resonances up to the massive top quark. What we have shown here in Paris is just the first outcome of an intense campaign of accurate measurements of their properties." said Guido Tonelli, spokesperson for CMS. "This patient and systematic work is needed to establish the known background to any new signal."

"The LHCb experiment is tailor-made to study the family of b particles, containing beauty quarks," said the experiment's spokesperson Andrei Golutvin, "So it's extremely gratifying that we are already finding hundreds of examples of these particles, clearly pin-pointed through the analysis of many particle tracks."

"The current running with proton collisions has allowed us to connect with results from other experiments at lower energies, test and improve the extrapolations made for the LHC, and prepare the ground for the heavy-ion runs," said Jurgen Schukraft, spokesperson for the ALICE collaboration. This experiment is optimized to study collisions of lead ions, which will occur in the LHC for the first time later this year.

Two further experiments have also already benefited from the first months of LHC operation at 3.5 TeV per beam. LHCf, which is studying the production of neutral particles in proton-proton collisions to help in understanding cosmic-ray interactions in the Earth's atmosphere, has already collected the data it needs at a beam energy of 3.5 TeV. TOTEM, which has to move close to the beams for its in-depth studies of the proton, is beginning to make its first measurements.

CERN will run the LHC for 18-24 months with the objective of delivering enough data to the experiments to make significant advances across a wide range of physics processes. With the amount of data expected, referred to as one inverse femtobarn, the experiments should be well placed to make inroads into new territory, with the possibility of significant discoveries.

From sciencedaily.com

Intel photonics link hits 50 Gbps

Intel researchers have developed a silicon-based, optical data connection prototype capable of transferring up to 50 gigabits per second.


Currently, computer components are linked to each other via copper cables or traces on circuit boards. However, metals such as copper are prone to signal degradation when transferring data over long distances. This effectively limits the design of computers, forcing processors, memory, and other components to be placed just inches from each other.


But Intel's silicon-based optical data connection could eventually allow the industry to replace traditional connections with extremely thin and light optical fibers capable of transferring gigabits of data over long distances. 

According to Intel CTO Justin Rattner, silicon photonics will likely have multiple applications across the computing industry.
"For example, at these data rates one could imagine a wall-sized 3D display for home entertainment and videoconferencing with a resolution so high that the actors or family members appear to be in the room with you.



"And tomorrow's datacenter or supercomputer may see components spread
throughout a building or even an entire campus, communicating with each other at high speed, as opposed to being confined by heavy copper cables with limited capacity and reach."



Rattner explained that a silicon photonic-based data center would allow datacenter users - including search engine companies or cloud computing providers - to increase performance, while saving significant costs in space and energy. "[For now, though], the 50Gbps link is akin to a 'concept vehicle' that allows [us] to test new ideas and develop technologies [which] transmit data over optical fibers, using light beams from low cost and easy to make silicon.

"[Although] telecommunications and other applications already use lasers to transmit information, current technologies are too expensive and bulky to be used for PC applications."

The 50Gbps Photonics Link prototype comprises a silicon transmitter and receiver chip. The chip is equipped with four hybrid silicon lasers whose light beams each travel into an optical modulator that encodes data onto them at 12.5Gbps. 

 The four beams are then combined and output to a single optical fiber for a total data rate of 50Gbps. 



At the other end of the link, the receiver chip separates the four optical beams and directs them into photo detectors, which convert data back into electrical signals. 

It should be noted that Intel researchers are already working to accelerate data rates by scaling modulator speeds and increasing the number of lasers per chip.

This could eventually lead to the development of terabit/s optical links – with rates fast enough to transfer the entire contents of a typical laptop in just one second.
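The headline numbers are easy to check with a little arithmetic. The sketch below multiplies out the four 12.5Gbps lanes and estimates transfer times; the 125 GB "typical laptop" drive is an assumption for illustration.

```python
# Back-of-envelope numbers for the link described above: four lasers, each modulated
# at 12.5 Gbps, multiplexed onto one fiber. The laptop drive size is an assumption.

LANES = 4
RATE_PER_LANE_GBPS = 12.5

aggregate_gbps = LANES * RATE_PER_LANE_GBPS        # 50 Gbps
aggregate_bytes_per_s = aggregate_gbps * 1e9 / 8   # 6.25 GB/s

laptop_bytes = 125e9                               # assumed 125 GB of data
print(f"aggregate rate: {aggregate_gbps:.0f} Gbps ({aggregate_bytes_per_s / 1e9:.2f} GB/s)")
print(f"time to move {laptop_bytes / 1e9:.0f} GB at 50 Gbps: {laptop_bytes / aggregate_bytes_per_s:.0f} s")
print(f"time at a future 1 Tbps link: {laptop_bytes / (1e12 / 8):.0f} s")
```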

By Aharon Etengoff
From tgdaily.com 

A Cheaper Way to Catch CO2

Adding carbon-capture technology to a conventional coal plant can nearly double the price of the electricity it produces. This fact represents one of the big obstacles to passing legislation to regulate carbon-dioxide emissions. Now researchers at Codexis, based in Redwood City, CA, are using genetically engineered enzymes to make carbon-dioxide capture less expensive--their method could increase electricity costs by less than a third.

Gene machine: A Codexis researcher operates a high-volume liquid-handling system used to make gene variants--part of a process for engineering new enzymes.

The new enzymes increase the efficiency, by a factor of 100, of a solvent used to capture carbon dioxide. This promises to decrease the energy needed to capture and store the greenhouse gas. The researchers developed new ways to engineer enzymes that can operate at the high temperatures inside a coal plant's smokestack. 

The standard way to capture CO2 is to use a solvent called monoethanolamine (MEA). Carbon dioxide is absorbed by the solvent, which separates it from the other flue gases. To store the carbon dioxide, it first has to be freed from the solvent by applying heat; this produces a pure stream of carbon dioxide that can be compressed and permanently sequestered. The energy required to do this decreases the power output of a coal plant by about 30 percent. Combined with the extra equipment and materials needed to capture the CO2, this increases the cost of the electricity produced by roughly 80 percent. Codexis's approach could limit this cost increase to 35 percent or less, says James Lalonde, the company's vice president of biochemistry and engineering R&D.
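For a rough sense of what those percentages mean at the meter, the short calculation below applies the quoted cost increases to an assumed baseline generation cost; only the percentages come from the article.

```python
# Applying the article's percentages to an assumed baseline generation cost.
# Only the 80% and 35% increases come from the article; the 10 cents/kWh is illustrative.

baseline_cents_per_kwh = 10.0
mea_capture_increase = 0.80        # conventional MEA capture: roughly +80%
enzyme_mdea_increase = 0.35        # Codexis's enzyme-assisted MDEA target: +35% or less

print(f"no capture:     {baseline_cents_per_kwh:.1f} cents/kWh")
print(f"MEA capture:    {baseline_cents_per_kwh * (1 + mea_capture_increase):.1f} cents/kWh")
print(f"enzyme + MDEA:  {baseline_cents_per_kwh * (1 + enzyme_mdea_increase):.1f} cents/kWh")
```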

Researchers at Codexis genetically modified an enzyme, called carbonic anhydrase, involved with respiration in many organisms, including humans. Carbonic anhydrase helps a solvent called methyl diethanolamine (MDEA) bind with carbon dioxide. The most challenging problem was altering the enzymes so they could survive at the high temperatures found in smokestacks. The enzymes can survive at temperatures around 25 °C, but quickly stop working at temperatures higher than 55 °C to 65 °C.

Codexis's early results show that its modified enzymes can survive at temperatures above 85 °C for half an hour. This is high enough for the enzyme to survive in smokestacks, but not at the temperatures needed for freeing the carbon dioxide for storage (130 °C). Lalonde says the company has seen large improvements since these initial results were disclosed, but the company hasn't released the new figures yet.

The company has successfully engineered enzymes for drug development in the past. It has won two "green technology" awards from the U.S. Environmental Protection Agency for developing enzymes for making two drugs--atorvastatin, the active ingredient in the cholesterol-lowering drug Lipitor, and sitagliptin, the active ingredient in the diabetes drug Januvia. The enzymes simplified drug synthesis and reduced waste.

Codexis uses a proprietary version of directed evolution. In its simplest form, directed evolution involves making random changes to existing genes. These mutations alter one amino acid in the enzyme at a time. The genes that work best are then selected and changed to further increase performance. Codexis's researchers have developed a faster version of the process that involves swapping relatively large segments of the gene sequence--making multiple changes to amino acids each time. They've also developed computational techniques that allow them to determine what parts of the gene are most likely to lead to improvements in performance if they are modified. The changes make the process more efficient, and lead to big changes in performance in a relatively short amount of time.
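The loop at the heart of directed evolution (mutate, screen, keep the best, repeat) can be sketched in a few lines. The toy example below is not Codexis's process: the sequences are invented and the "fitness" score stands in for what would really be a lab assay, but it shows the iterative structure described above.

```python
# A toy version of the mutate-screen-select loop. Real enzyme engineering measures
# "fitness" in the lab; the target sequence and scoring here are invented for illustration.
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKTAYIAKQR"              # hypothetical "ideal" enzyme sequence

def fitness(seq):
    # Stand-in for an experimental assay: count positions matching the hypothetical target.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, n_changes=2):
    # Change several residues at once, echoing the multi-residue edits described above.
    seq = list(seq)
    for pos in random.sample(range(len(seq)), n_changes):
        seq[pos] = random.choice(AMINO_ACIDS)
    return "".join(seq)

parent = "MATAYIGKQA"              # hypothetical starting enzyme
for generation in range(20):       # rounds of mutation and selection
    library = [mutate(parent) for _ in range(50)]
    best = max(library, key=fitness)
    if fitness(best) > fitness(parent):
        parent = best
    if fitness(parent) == len(TARGET):
        break

print(f"after {generation + 1} rounds: {parent} ({fitness(parent)}/{len(TARGET)} positions matched)")
```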

"Codexis's technology has certainly proven itself quite powerful," says Stefan Lutz, associate professor of biomolecular chemistry at Emory University. He cautions that it may be more difficult to work with carbon dioxide than with pharmaceuticals. "If they do succeed, it would be a huge deal," he says.

By Kevin Bullis 
From Technology Review

Building Super-Fast Electronics Components

For years, researchers have touted graphene as the magic material for the next generation of high-speed electronics, but so far it hasn't proved practical. Now a new way of making nanoscale strips of carbon--the building block of graphene--could kick-start a shift toward superfast graphene components.

Graphene strips: The zigzag-shaped graphene nanoribbons in this image are a nanometer wide and 50 nanometers long.

The new method, which involves building from the molecular scale up, comes from researchers at the Max Planck Institute for Polymer Research in Germany and Empa in Switzerland. With atomic-level precision, the researchers made graphene nanoribbons about a nanometer wide.

The molecule-thick carbon material called graphene outperforms silicon, which is currently used in electronic components, in every way. It conducts electricity better than silicon, it bends more easily, and it's thinner. Using graphene instead of silicon could lead to faster, thinner, more powerful electronic devices. However, unless graphene sheets are less than 10 nanometers wide and have clean edges, they lack the electronic properties needed before manufacturers can use them for devices like transistors, switches, and diodes--key components in circuitry. 

The Swiss team fabricated these skinny graphene strips by triggering molecular-scale chemical reactions on sheets of heated gold. This let the team precisely control the width of the nanoribbons and the shape of their edge. Molecules were arranged into long fibers on the gold surface. When that surface was heated, adjacent strings linked and fused to form ribbon structures about one nanometer across, with a uniform zigzag edge.
"The beauty of that is that it can be done with atomic precision," says Roman Fasel, the corresponding author on the study. "It's not cutting, it's assembling." 

Other ways of making nanoribbons involve peeling strips of graphene from a larger sheet, etching them with lithography, or unzipping cylinder-shaped carbon nanotubes. But such nanoribbons are thicker and have random edges.

"In nanoribbons, he who controls the edges wins," says James Tour, a graphene expert at Rice University, who was not involved with the work. "There is no way yet to take a big sheet of graphene and chop it up with this level of control." 

"This type of nanoribbon would enrich and open up new possibilities for graphene electronics," says Yu-Ming Lin, a researcher working on graphene-based transistors at the IBM T. J. Watson Research Center in New York. 

Graphene nanoribbons are still a long way from practical application, says Tour. "The next step is to make a handful of devices. That's not hard to do. The big step is to orient it en masse."

But the success of Fasel and his team's chemical method, Tour says, will encourage more research into fine-tuning the steps so that nanoribbons of this quality can be produced on a large scale. For instance, researchers can now experiment with the finer edge structure and electronic effects of the new nanoribbons, testing theories that, to date, they could only simulate on computers.

"It points the direction rather than being a final result," says Walter de Heer, a researcher at the Georgia Institute of Technology who has developed a way to grow graphene on silicon chips. "It's a first step in a long chain of steps that will lead to graphene electronics." 

By Nidhi Subbaraman
From Technology Review

Revealing the True Colors of Masterworks

Enhancements to image-processing technologies for colorizing black-and-white images are helping curators divine the colors used by the French artist Henri Matisse on his landmark work Bathers by a River--while the painting was still a work in progress.

The tricks deployed by curators could be more widely relevant to other colorizing applications where it's not obvious what the colors should be in a black-and-white image of a piece of art, or in cases where subtle differences are important and should be highlighted, such as in medical images.

 Shades of Matisse: To figure out what Bathers by a River looked like in 1913--four years before Matisse finished it--curators and computer scientists digitized a black-and-white photo from the time (top) and then colorized it (center). The finished painting (bottom) is shown as it appears today.

Researchers at Northwestern University used information about Matisse's prior works, as well as color information from test samples of the work itself, to help colorize a 1913 black-and-white photo of the work in progress. Matisse began work on Bathers in 1909 and unveiled the painting in 1917. 

In this way, they learned what the work looked like midway through its completion. "Matisse tamped down earlier layers of pinks, greens, and blues into a somber palette of mottled grays punctuated with some pinks and greens," says Sotirios A. Tsaftaris, a professor of electrical engineering and computer science at Northwestern. That insight helps support research that Matisse began the work as an upbeat pastoral piece but changed it to reflect the graver national mood brought on by World War I. 

The process was more complex than the methods used routinely for colorizing old movies and family photographs. In those kinds of applications, backgrounds such as skies, clothing, and skin tones are "more homogeneous and thus easier to extrapolate," says Tsaftaris. The color of an entire sky can be determined from a relatively small batch of pixel data, he says. It's far harder in a black-and-white image of a piece of color art, because "the painter works from a very unique palette of colors that is particular to him, that he sees in his mind," adds Aggelos Katsaggelos, a professor of electrical engineering and computer science at Northwestern, who collaborated with Tsaftaris.

The researchers made a high-resolution digital version of the 1913 photograph to work from. The photograph itself contained crucial clues to colors and their saturation levels. But to draw a more complete picture, the scientists and their collaborators needed more data.

They took multiple digital photos of Bathers in its current form, going quadrant by quadrant to obtain a resolution of 4,000-by-5,000 pixels. They also included information from historical accounts of what the painting looked like in 1909 and again in 1913, drawing on research by curators at the Art Institute of Chicago.

Finally, they used some sample data from their collaborators at the Art Institute of Chicago: cross-sections of the hidden paint layers on Bathers, obtained by removing microscopic core samples of the painting for spectroscopic analysis.

They applied all of this information to help colorize the photograph, taken by photographer Eugene Druet in November 1913.

When all of the data sources were combined, the researchers could transfer colors onto their high-resolution digital version of the old photograph. The algorithm's job at that point was to propagate the transferred colors across the entire image, pixel by pixel, to rediscover some of the painting's 1913 appearance.
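To give a flavor of what "propagating colors pixel by pixel" can mean, here is a deliberately simple sketch: each gray pixel takes the color of the seed whose gray level it most resembles. This is not the Northwestern team's actual algorithm, and all of the pixel values and seed colors below are invented.

```python
# A deliberately simple color-propagation sketch (not the researchers' algorithm):
# every gray pixel takes the color of the seed whose gray value is closest to its own.
# The gray values, seed positions, and seed colors are all invented.

gray = [
    [ 30,  32, 200, 210],
    [ 28,  90, 180, 205],
    [ 85,  95, 100, 190],
]

# Hypothetical seeds: (row, col) -> RGB color, as if taken from paint cross-sections.
seeds = {(0, 0): (70, 110, 160),    # darker region -> blue-gray
         (0, 3): (220, 190, 170)}   # brighter region -> pale pink

def colorize(gray, seeds):
    seed_values = [(gray[r][c], rgb) for (r, c), rgb in seeds.items()]
    colored = []
    for row in gray:
        colored.append([min(seed_values, key=lambda sv: abs(sv[0] - g))[1] for g in row])
    return colored

for row in colorize(gray, seeds):
    print(row)
```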

"This research is an excellent example of collaborative research between computer science, art conservation, and art history," says Roy S. Berns, a chemist and color scientist at the Rochester Institute of Technology. "The historians bring their connoisseurship of the artist and their oeuvre. The conservators contribute their knowledge of artist materials and the artist's working method. The computer scientists facilitate the visualization in a physically realistic way. Because the physical data are sparse, collaboration is required to ensure the result is plausible."

The effort took three years, and the scientists and conservators say they held back their findings until they reached a 95 percent confidence level about the colorized image.

The algorithm can be tweaked to work with other similar situations and other artists. While this algorithm was "customized to work on paintings and on the particular style of Matisse," Tsaftaris said, "we can turn off some options, and it works on other paintings as well." 

Tsaftaris sees future applications of custom colorization, particularly in the medical field. The scientists are considering using their new methods to pseudo-colorize grayscale cardiac magnetic resonance images (MRIs) to make them easier for doctors to read, analyze, and use in rendering a diagnosis. In this case, they might use cues gleaned from color images of diseased hearts, for example, to inform their work in how to properly colorize black-and-white MRI images to bring out the most relevant distinctions.

By Tom Mashberg
From Technology Review

A Flu Vaccine without the Needle

Getting vaccinated for the flu or other infections could become as easy as pressing a patch onto the skin--no shot in the arm required.

No needle: Microneedles made of a polymer that dissolves in body tissues can be used to deliver vaccines directly and painlessly into the skin. The microneedles shown here, inserted into pig skin, dissolve in a matter of minutes.

A new paper published in Nature Medicine describes a patch that holds an array of microneedles that administer a vaccine and dissolve painlessly. That could make it possible for people to get inoculated more easily and even administer their own vaccines.

Most vaccines are delivered by an injection into muscle. But Mark Prausnitz, lead author of the paper and a chemical and biological engineer at Georgia Institute of Technology, says that the surface of the skin could be a better entry point. Because the body expects to encounter harmful invaders on its surfaces, the skin is loaded with cells that can launch an immune response--a key step for a vaccine to work.

Researchers have investigated other microneedle patches as a way to deliver drugs. This version, a collaboration between the labs of Prausnitz and Richard Compans, a microbiologist at Emory, adds an innovation: The needles are constructed out of a polymer that dissolves in bodily fluids. Just several hundred micrometers in length, the needles can penetrate the outer layers of the skin before melting away in a few minutes. As they do so, a vaccine encapsulated in the needles travels into the skin. Only a thin biodegradable backing is left behind, and it washes away in water.

Researchers tested the patches on mice and found that the animals that received the flu vaccine by skin patch could fight off an infection 30 days later just as well as mice that had received an injection. Furthermore, mice vaccinated through the skin had a much lower level of virus in their lungs, suggesting that the patch could provoke a more effective immune response.

Prausnitz and his collaborators are seeking funding for a clinical trial of the influenza vaccine patch in humans. They are also investigating the possibility of using a similar system for other types of infectious diseases.

Samir Mitragotri, a chemical engineer at the University of California, Santa Barbara, says that the work is "highly innovative," and that the dissolving microneedles solve two important problems in immunization: They are painless, and they avoid the need to dispose of medical waste.

Bruce Weniger, a flu vaccine researcher at the Centers for Disease Control and Prevention, adds that the patches would be less invasive for patients and easier to deliver to remote populations. As such, they could help make vaccination campaigns easier in developing countries. But Weniger adds that economics may get in the way of replacing existing vaccine shots with patches. 

By Courtney Humphries
From Technology Review

Human Trials Next for Darpa’s Mind-Controlled Artificial Arm

Pentagon-backed scientists are getting ready to test thought-controlled prosthetic arms on human subjects, by rewiring their brains to fully integrate the artificial limbs. 


Already in recent years, we've seen very lifelike artificial arms, monkeys nibbling bananas with mind-controlled robotic limbs, and even humans whose muscle fibers have been wired to prosthetic devices. But this is the first time that human brains will be opened up, implanted with a neural interface, and then used to operate an artificial limb.

It's a giant step that'll transform the devices, which were little more than hooks and cables only 50 years ago. And the progress is courtesy of Darpa, the Pentagon's far-out R&D agency, which has been sponsoring brain-controlled replacement limbs as part of its Revolutionizing Prosthetics Program.

A team of scientists at Johns Hopkins, who've been behind much of Darpa's prosthetic progress thus far, have received a $34.5 million contract from the agency to manage the next stages of the project. Researchers will test the Modular Prosthetic Limb (MPL) on a human. The test subject's thoughts will control the arm, which "offers 22 degrees of motion, including independent movement of each finger," provides feedback that essentially restores a sense of touch, and weighs around nine pounds. That's about the same weight as a human arm.

The prosthetic will rely on micro-arrays, implanted into the brain, that record signals and transmit them to the device. It's a similar design to that of the freaky monkey mind-control experiments, which have been ongoing at the University of Pittsburgh since at least 2004.

Within two years, Johns Hopkins scientists plan to test the prosthetic in five patients. And those researchers, alongside a Darpa-funded consortium from Caltech, University of Pittsburgh, University of Utah and the University of Chicago, also hope to expand prosthetic abilities to incorporate pressure and touch.

"The goal is to enable the user to more effectively control movements to perform everyday tasks, such as picking up and holding a cup of coffee," Michael McLoughlin, the project's program manager, says.

In other words, prosthetic arms that are remarkably similar to the real thing. But the long-term caliber of the MPL arm remains an open question. Just three months ago, Darpa launched a new program to overcome several problems with neuro-prosthetic models - most notably, the two-year lifespan of those implanted neural recording devices.


Big mystery holding back practical superconductors may have been solved

Superconductors carry electric current with no energy loss. They could revolutionize our electrical grid, but they only work at impractically low temperatures. We just figured out a key reason why – and possibly got a lot closer to room-temperature superconductors.

Scientists have spent the last two decades trying to figure out why their superconductors only work at temperatures barely any higher than absolute zero. They've been able to identify the so-called "pseudogap" phase, which is a temperature range below room temperature at which superconductivity breaks down. We know there's something about what happens to electrons during this phase that makes superconductors fail, but until now we couldn't figure out what, despite several frustrating attempts to find out.

But physicists working for the Department of Energy may have just solved the mystery. Working with copper-oxide superconductors, they identified a change in electron behavior that only occurs during the pseudogap phase. Specifically, they keyed in on how easily electrons could "jump" from each copper and oxygen site to the tip of a microscope needle.

The difference in electron behavior was remarkably obvious, explains project leader Séamus Davis:


"Picture the copper atom at the center of the unit, with one oxygen to the 'north' and one to the 'east,' and this whole unit repeating itself over and over across the copper-oxide layer. In every single copper-oxide unit, the tunneling ability of electrons from the northern oxygen atom was different from that of the eastern oxygen."
Finding such a clear break in symmetry is very exciting, because there's a ton of precedent for such asymmetries revolutionizing our understanding of other systems. For instance, the discovery of broken symmetries in liquid crystals gave scientists the guidance needed to control the crystal, and now liquid crystal displays (or, as they're more commonly known, LCD screens) are commonplace and inexpensive. The hope is that a similarly huge leap in understanding of superconductors will come from uncovering this asymmetry in the pseudogap phase.

The researchers hope to find similar broken symmetries in other copper-oxide superconductors. They are also trying to figure out how the asymmetry affects electron flow, how this in turn affects superconductivity, and how to work around these issues to make room temperature superconductors a practical possibility.

There's still much work to do, but as Davis explains, the potential benefits are incalculable:

"Developing superconductors that operate without the need for coolants would be transformational. Such materials would greatly improve the efficiency of energy-distribution systems, saving enormous amounts of money and updating the electrical grid to meet the needs of the 21st Century."

Currently, the only working superconductors have to operate at extremely low temperatures. The fact that they operate with no resistance and thus no energy loss is theoretically a huge savings, but in practice it's completely canceled out by the huge amount of exotic coolants needed to get them to such temperatures.


Handmade knife chipped from fiber optic glass

Flint (and glass) knapping is no longer practiced on a large scale, but it used to be the primary method of making weapons in primitive cultures. In this day and age, of course, it's easy to go to the sporting goods store and pick up a quality steel knife, but it wasn't always so.


There are still people out there who practice the art (and I do mean art) of knapping; one such artist created this knife from fiber optic glass and offers them for sale on his web site. Personally, I doubt I would ever use such a knife for fear of breaking it, but it does make an amazing display piece. If you want one, it'll cost you $165 – a small price to pay considering the amount of time it must have taken to hand-make this knife from a piece of glass. Remember: one mistake, and you have to start over.

by Dave Freeman  
From crunchgear.com

Microsoft's Terapixel Project Creates Clearest, Biggest Night Sky Map Yet


First they gave us a high-res tour of Mars — now Microsoft has made the largest and clearest night-sky map ever. It's a terapixel image: 1,000,000,000,000 pixels.

The software giant's Terapixel project stitched together 1,791 pairs of red-light and blue-light plates from telescopes in California and Australia. The result is the map above, which covers the night sky of the northern and southern hemispheres.

Using WorldWide Telescope and Bing maps, you can zoom in on the cosmos, peering through the dust of the Milky Way to distant galaxies. Microsoft announced Terapixel July 13 at its annual Research Faculty Summit.
To view every pixel of the image, you'd need a half-million high-definition televisions. If you tried to print it, the document would extend the length of a football field, Microsoft says.
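Both of those comparisons follow from straightforward arithmetic, sketched below; the only inputs are the one-trillion-pixel count, standard 1920-by-1080 HD resolution, and the 802 GB file size mentioned later in the piece.

```python
# The arithmetic behind the comparisons above. Inputs: one trillion pixels,
# standard 1920x1080 HD resolution, and the 802 GB final image size cited below.

total_pixels = 1_000_000_000_000
hd_pixels = 1920 * 1080

print(f"HD screens needed to show every pixel: {total_pixels / hd_pixels:,.0f}")   # roughly 482,000
print(f"stored bytes per pixel in an 802 GB file: {802e9 / total_pixels:.2f}")     # under 1, implying compression
```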

The project required re-computing all the image data collected by the Digitized Sky Survey during the past 50 years. The images, produced by the Palomar telescope in California and the Schmidt telescope in Australia, each cover an area of the cosmos six and a half degrees square.

The map's quality and clarity stem from computerized changes to the original images, which have varying levels of brightness, color saturation, noise, and vignetting (darkening of the corners).

Developers ran parallel code on 512 computer cores in a Windows High Performance Computing cluster, and were able to process the raw digitized data in about half a day, according to Microsoft. Once the files were decompressed, they had to undergo some changes to correct the vignetting problem. Red and blue plates had to be precisely aligned to make a color image, and then everything had to be stitched together, which took about three more hours.

Terapixel then used an image optimization program to create a seamless, spherical panorama of the sky. That took about four hours, according to Microsoft.
The final image is 802 GB.



From gizmodo.com

Brighter Color for Reflective E-Reading Displays

Electronic paper that reflects light, instead of filtering it from a backlight, as most conventional displays do, is easy on the eyes and saves on battery life. But this reliance on ambient light becomes a handicap when trying to make a bright, beautiful color display. Researchers at HP are addressing the problem by developing new materials that use ambient light to create a more vibrant color for video-capable, low-power screens.

Conventional displays, including LCDs, use a backlight to produce light, and layers of optics to filter it to create different colors. This type of display needs a lot of power because most of the light is lost during filtering. 

Vibrant reflections: Red, yellow, and magenta test swatches made from novel luminescent materials are shown next to an array of color standards used to evaluate the quality of displays. Researchers at HP are using these materials to develop more vibrant reflective displays.

Reflective displays need no backlight. For example, the pixels in the displays made by E Ink, the dominant electronic-paper company, are filled with black and white capsules of opposite charges; when the pixels are switched, the white or black particles move to the surface, reflecting or absorbing ambient light.

Making color electronic paper is a major challenge, and the prototypes made so far look muddy and dim compared to conventional displays. Adding color filters over black-and-white pixel arrays--the approach taken by E Ink--introduces the same light-loss problems that LCDs suffer from. But in an LCD, the backlight can be pumped up to maintain brightness. Reflective displays are limited to ambient light, and that loss can't be recovered. Another problem is that the colored subpixels used in color displays typically sit side by side, with one-third of the area of each pixel given over to each color: red, blue, and green. When the pixel is reflecting red light, two-thirds of the incident light is simply lost, no matter how good the filter is.

Gary Gibson, a scientist in the information surfaces lab at the company's labs in Palo Alto, CA, is involved with a project aimed at addressing the dimness problem using brighter, luminescent materials. The company has developed a composite material that converts blue and green light into red and another that converts blue light into green. It isn't practical to make a blue luminescent pixel. A fast-switching liquid-crystal shutter sits above each pixel and lets light in and out; mirrors below also help light escape.
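The "two-thirds lost" figure follows directly from the side-by-side subpixel layout, as the small calculation below shows. The filter transmission number is an assumption for illustration, not an HP figure.

```python
# Why "two-thirds of the incident light is simply lost" with side-by-side subpixels.
# The filter transmission value is an assumption for illustration, not an HP figure.

incident = 1.0                       # all ambient light falling on one pixel
red_subpixel_share = incident / 3    # only a third of the pixel area is the red subpixel
filter_transmission = 0.9            # assumed best-case red color filter

max_red_reflected = red_subpixel_share * filter_transmission
print(f"filter-based reflective pixel showing red returns at most {max_red_reflected:.0%} of ambient light")
# A luminescent material that also converts the blue and green components into red,
# as HP proposes, is what could push a pixel past this one-third ceiling.
```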

Developing luminescent materials that convert the color of light is a major materials-science challenge. "There aren't any materials in nature that do all the things we'd like," Gibson says. The group has developed composites for each color. In the red composite, for example, blue and green light is passed along from dye molecule to dye molecule, gradually converting it to the red wavelength with as little loss as possible. Blue remains a challenge because there's not enough higher-energy light in sunlight or ambient room lighting to convert to blue. So the company's prototypes either use a conventional, larger blue subpixel or rely on blue light in a white subpixel to achieve sufficient brightness.

In theory, the HP materials should be brighter than a perfect color reflector, says Gibson. So far, Gibson says, they've made materials that are stable over time, and have demonstrated these materials in optical systems similar to those that could be used in a display. As they continue to tinker with the materials, HP researchers are developing manufacturing systems for complete displays. Gibson says they should be compatible with high-volume production processes such as ink-jet printing.

The popularity of the iPad shows that there is "clearly an appetite for color electronic gizmos for reading magazines, books, and other content," says Nick Colaneri, director of the Flexible Display Center at Arizona State University. "Vibrant, color e-paper will feed off that, and will multiply the market," he predicts.

Down the road, HP may combine reflective displays with flexible, rugged plastic electronics being developed as part of another project from the Palo Alto labs. "That would be really innovative," says Paul Semenza, a senior analyst at industry research firm Display Search. "A flexible, low-power color display is the Holy Grail," he adds. "The key thing is, can they identify and manufacture all the materials and get it to work as it seems it should?"

Meanwhile, E Ink product manager Lawrence Schwartz says the company's color electronic paper will be in products at the end of the year. The company is compensating for some of the light loss through the color filters by capitalizing on improvements in its ink formulations to produce higher contrast between white and black. The company is also improving the switching speed of its displays, which will eventually mean more animation and video.

By Katherine Bourzac 
From Technology Review

Unmanned Boeing plane can stay in the air for 4 days

Boeing has unveiled a new hydrogen-powered plane that is not only one of the greenest and most lightweight aircraft the company has ever introduced, but can also stay aloft for four days without a pilot. The Phantom Eye's only byproduct is water, and it doesn't weigh much more than a standard car, although its 150-foot wingspan makes it a pretty large vehicle nonetheless. Designed for covert and government operations, the Phantom Eye is one of the first hydrogen-powered planes to date. Boeing introduced the first aircraft with a hydrogen fuel cell in 2008, though that one was a manned plane.


This potentially revolutionary new plane can reportedly carry a 450-pound payload and can reach cruising speeds of 170 miles per hour. Because of low energy requirements, it can stay in the air for 4 days before needing to be refueled.

The two four-cylinder, 2.3-liter engines were supplied by Ford, each producing 150 horsepower. The engines are not that different from what drivers would find in a Ford car, which is why the plane remains relatively lightweight.

Construction of the plane is already complete and it just needs to undergo testing, which will begin later this summer at a NASA facility. Its first flight is set to be sometime next year.

By Mike Luttrell
From tgdaily.com 

Fermilab denies it's found 'God particle'

Fermilab is denying reports that its Tevatron particle accelerator has detected a Higgs boson - the so-called God Particle. Physicist Tommaso Dorigo suggested last week in a blog post that the lab had discovered a Higgs effect.


"It reached my ear, from two different, possibly independent sources, that an experiment at the Tevatron is about to release some evidence of a light Higgs boson signal. Some say a three-sigma effect, others do not make explicit claims but talk of a unexpected result," he said. 

"That the result comes from the Tevatron is for sure, since the LHC experiments do not have nearly enough data yet to search for that elusive particle, and other particle physics experiments in the world have not nearly enough energy to produce it."

Meanwhile, Lubos Motl says he's heard the same thing from a prominent physicist.
"I've heard that there's a rumor going around Aspen that the Tevatron will be announcing discovery of gluon + b → b + Higgs, which would then require large tan(beta), which would fit the MSSM. I guess we'll find out in a couple of weeks," he quotes the physicist as saying.

The Higgs boson is predicted to exist by the Standard Model of particle physics - but is the only particle postulated by that theory that has never been observed. If it exists, it could explain the existence of mass.
Fermilab is denying the story. "Let's settle this: the rumors spread by one fame-seeking blogger are just rumors. That's it," it says in a tweet.

But the denial stops short of saying categorically that no evidence for the particle has been found. A definitive answer one way or the other is likely to emerge next week at the International Conference on High Energy Physics in Paris.

By Emma Woollacott
From tgdaily.com 

Gene Therapy for Eye Diseases

The pharmaceutical giant Genzyme has started a clinical trial to see whether a drug to treat macular degeneration could be delivered via long-lasting gene therapy rather than monthly injections.

Eye colors: Drusen, the yellow flecks in this image of the retina, are common in people with age-related macular degeneration. These flecks are made up of proteins involved in the part of the immune system called the complement system, which has also been implicated in the disease by genetic studies.

A drug called Lucentis, made by Genentech, has proved effective at treating the wet form of age-related macular degeneration, which can lead to blindness. Some 200,000 Americans a year are diagnosed with the disease. But Lucentis has to be injected into the eye every month or two, a burden for patients and doctors.
Lucentis binds to and neutralizes a wound-healing growth factor known as VEGF. This binding action stalls the excess growth of blood vessels in the eye that characterizes age-related macular degeneration. Genzyme's gene therapy drug, officially called AAV2-sFLT01, would insinuate itself into the patient's retinal cells to produce the same VEGF-binding protein as Lucentis over far longer periods--up to several years.

A phase 1 clinical trial of Genzyme's gene therapy treatment began at the end of May. Three patients received the treatment, according to Sam Wadsworth, a Genzyme group vice president in charge of gene and cell therapy. Preliminary results should be available in about a year.

The trial is one of a handful worldwide seeking to prove the effectiveness of gene therapy for eye diseases. The Genzyme trial also involves using a new type of virus as the delivery mechanism. Early results of a federally funded trial to deliver normal-functioning genes to patients with a rare retinal disease known as type 2 Leber congenital amaurosis, or LCA, have confirmed that this "viral vector" has merit for eye treatments, several researchers say.

The LCA trials "demonstrated success both in terms of safety and ability to introduce the gene and have efficacy and success," said Jeffrey S. Heier, an assistant professor at Tufts University School of Medicine and director of retinal research at Ophthalmic Consultants of Boston, a private practice group, who is involved in the Genzyme research. "This study is taking the virus vector that they used, and [Genzyme has] taken what has really been the success of the anti-VEGF story and they've packaged the two together."

Eyes have been an early target for gene therapy because they are small (meaning they require relatively little active dose), because they are self-contained, and because the tools of eye surgery have advanced enough to make the treatments possible. The drug has to be delivered to the retina, a thin film lining the inner wall of the eye. Instrumentation has improved in recent years to allow injections through the retina without piercing it, said Shalesh Kaushal, chairman of ophthalmology at University of Massachusetts Memorial Medical Center and UMass Medical School.

To Kaushal, who is involved in the Genzyme study as well as the LCA research, the big challenge will be broadening the use of gene therapy to dozens more diseases, and using that understanding to eventually reach beyond the eye. "If one could understand those fundamental cellular, biochemical events and identify targets, you might have the chance to treat many diseases with a single gene-therapy construct," Kaushal said.

Earlier gene therapy programs used a type of virus called adenovirus to deliver genes, but both the LCA and Genzyme trials are using adeno-associated virus, which is far less inflammatory and which expresses itself over longer periods than adenovirus, therefore making the treatment last longer, Wadsworth said. Viruses are used to deliver gene therapies because they are adept at getting into cells.

VEGF is involved in vascular cell growth throughout the body, and its expression increases in the presence of a wound. Studies have shown that with Lucentis, virtually all the VEGF-binding protein stays within the eye, and does not significantly affect VEGF levels elsewhere in the body, Wadsworth says. Genzyme's drug will provide even lower levels of the VEGF-binding protein, so it's expected that the drug will not have any adverse effects throughout the body, he said.

The trick will be getting the cells to produce enough VEGF-binding protein to help patients, said Peter Campochiaro, a professor at the Wilmer Eye Institute at Johns Hopkins Medicine, who is involved in the research. In addition to establishing safety, the current phase 1 trial will explore four different doses of the study drug. "There's no reason why this shouldn't work, other than if the expression of the gene is not sufficient. That's really what this trial should determine," Campochiaro says. "It appears that the more you suppress VEGF and the more you keep it suppressed, the better the outcomes."

By Karen Weintraub
From Technology Review

New Obesity Drug Could Have Fewer Side Effects

A novel compound called Lorcaserin helped overweight and obese people lose about 5 percent of their body weight, according to a study published today in the New England Journal of Medicine. While the weight reduction is modest, the drug could have fewer side effects than others that appear to be more effective.

The drug, developed by San Diego-based Arena Pharmaceuticals, acts on a specific subset of receptors for the chemical messenger serotonin. These receptors play a role in satiety, the feeling of fullness. 

The company has been working to develop such a compound since the 1990s, when research linked appetite to some serotonin receptors in the brain. "This is the first tailor-made molecule to target receptors involved in producing satiety and reducing caloric intake," says Arne Astrup, a physician at the University of Copenhagen, who wrote an editorial accompanying the paper in the NEJM. "I think we now have a much better understanding of the biology of these systems working in the brain and the peripheral effects than we did when some of the first compounds came on market. I think we feel more confident that this is a safe drug."

Several weight-loss drugs have been pulled from the market or abandoned in late-stage development because of dangerous side effects. Long-term safety is a major concern, since previous studies suggest that in many cases such drugs must be taken continuously to maintain weight loss. "Weight-loss drugs only work as long as you take them," says Steven Smith, a physician at Florida Hospital in Winter Park, FL, who led the study and is a paid consultant to Arena. "They are more like cholesterol or blood-pressure drugs than antibiotics, which cure an infection and can then be stopped."

Lorcaserin was designed to target a subset of serotonin receptors called 5-HT2C. This specificity is in contrast to that of fenfluramine, a weight-loss drug often prescribed in the 1990s in combination with a second drug called phentermine (the combination was known as fen-phen). Fenfluramine was pulled off the market in 1997 after it was linked to heart-valve problems and pulmonary hypertension. These side effects were thought to come from the drug's action on a different set of serotonin receptors, 5-HT2B, found on heart and lung cells. By targeting brain receptors specifically, Lorcaserin appears to avoid these side effects.

In the new study, funded by Arena, scientists studied more than 3,000 people; half were given the drug, half took a placebo. After a year, approximately 55 percent of the drug group and 44 percent of the placebo group remained in the trial. Nearly 50 percent of the Lorcaserin group lost 5 percent of their body weight, compared to about 20 percent of the placebo group. About 10 percent of the drug group lost 10 percent of their body weight, compared to 7 percent of the placebo group. (Weight loss of about 10 percent is linked to decreases in cholesterol level and blood sugar.) Those who took the drug for a second year were more likely to maintain their weight loss; nearly 70 percent did so, compared to about half of those given a placebo in the second year.
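
To put those percentages in rough absolute terms, the back-of-the-envelope sketch below converts them into approximate patient counts. It assumes an even split of roughly 3,000 participants and applies the quoted rates to everyone randomized; the article does not spell out either assumption, so the counts are illustrative only.

    # Rough, illustrative arithmetic based on the figures quoted above.
    # Assumptions (not stated in the article): ~3,000 participants split
    # evenly, with the quoted rates applied to everyone randomized.

    total = 3000
    drug_group = placebo_group = total // 2          # ~1,500 people in each arm

    retained_drug = 0.55 * drug_group                # ~55% stayed in the trial
    retained_placebo = 0.44 * placebo_group          # ~44% stayed in the trial

    lost5_drug = 0.50 * drug_group                   # ~50% lost >=5% of body weight
    lost5_placebo = 0.20 * placebo_group             # ~20% on placebo

    lost10_drug = 0.10 * drug_group                  # ~10% lost >=10% of body weight
    lost10_placebo = 0.07 * placebo_group            # ~7% on placebo

    print(f"Completed year one: ~{retained_drug:.0f} drug vs ~{retained_placebo:.0f} placebo")
    print(f"Lost >=5% of body weight: ~{lost5_drug:.0f} drug vs ~{lost5_placebo:.0f} placebo")
    print(f"Lost >=10% of body weight: ~{lost10_drug:.0f} drug vs ~{lost10_placebo:.0f} placebo")

On these assumptions, roughly 750 people taking the drug and about 300 taking placebo would have lost at least 5 percent of their body weight.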

The rate of weight loss with Lorcaserin is modest compared to other drugs in development, says Sajani Shah, an obesity expert at Tufts Medical Center, in Boston, who was not involved in the study. "But this drug looks more safe so far," she says. 

According to the study, the most common side effects were headaches, dizziness, and nausea. Most significantly, scientists did not see an increase in heart-valve disease in people who took Lorcaserin, though the researchers will need to show this is true in a larger number of patients in order to satisfy the U.S. Food and Drug Administration. The drug also appears to lack the psychiatric side effects, such as depression, seen with some other weight-loss drugs. Japanese drugmaker Eisai bought marketing rights to the drug on July 1.

Weight-loss drugs have had a troubled past. The field's greatest recent hope, a drug called rimonabant, was pulled from the European market in 2007 due to an increased risk of depression and suicidal thinking. (It was not approved in the United States.) A second drug, called sibutramine, was withdrawn in Europe this year after being linked to an increased risk of stroke and heart attack. (The FDA plans to review the drug later this year.) And in May, the FDA warned that a drug called orlistat, sold over the counter as Alli, is linked to a rare type of liver injury.

Lorcaserin is one of three experimental weight-loss drugs the FDA will review this year, and the only novel compound in the group. The other two drugs, Qnexa and Contrave, are both combinations of existing drugs originally designed to target addiction, depression, and obesity. An FDA committee reviewed data on Qnexa, a combination of the weight-loss drug phentermine and the anticonvulsant topiramate, this week. A report released Tuesday concluded that the drug is effective but has some safety concerns. For example, clinical studies show it is linked to mild to moderate psychiatric side effects, such as depression. 

By Emily Singer 
From Technology Review

Tiny Springs Could Reduce Microchip Waste

Using springs and glue instead of solder to make electronic connections between computer chips could end one of the electronics industry's most wasteful habits, say researchers at the Palo Alto Research Center and Oracle.

Spring board: Metal springs turn the connection of computer chips to circuit boards into a reversible process, making it possible to replace a broken chip without throwing out the whole board.

"The whole industry is based on nonreworkable technology like solder or tape," explains Eugene Chow, of PARC. "If one chip in a module of several doesn't work after you've soldered them down, you have to throw out the whole thing."

Chow and colleagues are fine-tuning an alternative approach. They pattern a surface with microscale springs that compress slightly under a chip's weight, and these form a lasting, secure electronic connection when the two surfaces are glued together. "You can turn it on, and if it works great, do a final bond with adhesive," says Chow. "If it doesn't work, you can just take off the die that failed and replace it."

For now, the collaborators are developing their springy approach for the high-performance processors used in supercomputers or high-end servers. These chips are combined into groups known as multichip modules, which need the processors packed closely together in order to speed the transfer of signals between them.

"I think it's just a matter of course that this approach will get to the lower-end applications, too, though," says Chow. "Eventually this could be in a high-end cell phone--everyone wants to get more chips into everything, and this can help, because the pitch [the horizontal distance between connections] can be so small." The team has shown that their springs can be made as close together as six microns, compared to the tens of microns necessary with solder connections.

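To illustrate what that tighter pitch could mean in practice, the rough sketch below compares contact densities for a square grid of connections, taking 30 microns as an assumed stand-in for the "tens of microns" solder figure; the numbers are illustrative estimates, not results from the team.

    # Illustrative density comparison for a square grid of contacts.
    # Pitch values: 6 um for the springs (from the article); 30 um is an
    # assumed representative value for a "tens of microns" solder pitch.

    def contacts_per_mm2(pitch_um):
        """Contacts per square millimetre for a square grid with the given pitch."""
        per_mm = 1000.0 / pitch_um        # contacts along 1 mm
        return per_mm ** 2

    spring = contacts_per_mm2(6)          # ~27,800 contacts per mm^2
    solder = contacts_per_mm2(30)         # ~1,100 contacts per mm^2

    print(f"Spring pitch (6 um):  ~{spring:,.0f} contacts/mm^2")
    print(f"Solder pitch (30 um): ~{solder:,.0f} contacts/mm^2")
    print(f"Density ratio: ~{spring / solder:.0f}x")

On those assumptions, the springs would allow roughly 25 times as many connections in the same area.
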
The springs are flat metallic strips that curve up from the substrate that a chip is fixed to. "Fundamentally it's the simplest spring you can imagine," says Chow. The spring-building process starts with the addition of a thin titanium layer to the substrate. On top of this, the spring material is deposited in a way that builds strain into the top layer. Photolithography is then used to carve out the outlines of the many springs before the titanium is etched away from underneath.

"The tension makes the springs simply pop up," says Chow. "It's an elegant way of making a three-dimensional structure." The finished spring is coated with a layer of gold for added strength and a better electronic connection. The manufacturers must design the layout of the springs so that they match up to the contacts on the chips. Small sapphire balls or other peg-like structures on the surface of the substrate fit into notches in the chip to ensure the two are positioned correctly.

Last month, Chow and colleagues presented their work at the Electronic Components and Technology Conference in Las Vegas. They showed that their approach works on a test chip from Oracle that simulates the electrical and thermal behavior of a high-end processor. "It's a test vehicle to evaluate the finished module," Chow explains. The test chip has nearly 4,000 cells, each measuring 180 microns on a side and containing a thermometer, sensors to measure the power supplied to that part of the chip, and a heater, so that the overall chip pumps out the same heat as a high-power processor working at full capacity.
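
For a rough sense of scale of that test vehicle, the sketch below estimates its total instrumented area, assuming roughly 4,000 cells of 180 microns on a side tiled without gaps; the article does not give the exact cell count or layout, so this is an order-of-magnitude estimate only.

    # Order-of-magnitude estimate of the test chip's instrumented area.
    # Assumptions: ~4,000 cells, each 180 um x 180 um, tiled without gaps.

    cells = 4000
    cell_side_um = 180

    cell_area_mm2 = (cell_side_um / 1000.0) ** 2      # 0.0324 mm^2 per cell
    total_area_mm2 = cells * cell_area_mm2            # ~130 mm^2 in total
    equivalent_square_mm = total_area_mm2 ** 0.5      # ~11.4 mm on a side

    print(f"Per-cell area: {cell_area_mm2:.4f} mm^2")
    print(f"Total instrumented area: ~{total_area_mm2:.0f} mm^2")
    print(f"Equivalent square die: ~{equivalent_square_mm:.1f} mm on a side")

On those assumptions the instrumented area works out to roughly 130 square millimeters, comparable to a high-end processor die, which is consistent with the chip's role as a thermal stand-in.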

Another reason to think beyond solder, says Chin Lee, a professor of electrical engineering and computer science at the University of California, Irvine, is the fact that it will soon limit the industry's ability to make ever-smaller devices. "Alternatives are needed, because solder is not going to continue to shrink," says Lee.

Manufacturers can position the electronic springs more accurately than solder, and this can boost performance, for example by letting them arrange the chips in more compact groups, says Chow. In the race to make faster chips, he says, chip makers can often overlook the ways that components are connected and packaged. "This isn't a glamorous field," says Chow. "Everyone focuses on transistors and components, but packaging is a real bottleneck for performance."

Bahgat Sammakia, director of the Small Scale Systems Integration and Packaging Center at Binghamton University, agrees. "You can have the best technology in the world, but without packaging, you won't get the best performance from them; it is what enables the creation of the finished systems we are aiming for."

Sammakia says that although research into novel approaches to packaging chips is valuable, ultimately the market must decide whether a particular solution will work. "You can always solve a problem, but not always in a way that is commercial."

Jennifer Ernst, PARC's director of business development, says the project is being directly shaped by what is possible at commercial scale. "Our first priority is to get this into manufacturing," she says. She notes that the springs are made simply, using just a few layers of metal and standard deposition and etching processes. "We are currently making these at our own fab, but expect the volume to be cost-competitive at commercial scale," she says.

By Tom Simonite
From Technology Review