Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........


2010 preview: The polyglot web

Imagine what browsing the web would be like if you had to type out addresses in characters you don't recognise, from a language you don't speak. It's a nightmare that will end for hundreds of millions of people in 2010, when the first web addresses written entirely in non-Latin characters come online.

Mapping the internet as it goes truly global


Net regulator ICANN - the Internet Corporation for Assigned Names and Numbers - conceded in October that more than half of the 1.6 billion people online use languages with scripts not fully compatible with the Latin alphabet. It is now accepting applications for the first non-Latin top level domains (TLDs) - the part of an address after the final "dot". The first national domains, counterparts of .uk or .au, should go live in early 2010. So far, 12 nations, using six different scripts, have applied and some have proudly revealed their desired TLD and given a preview of what the future web will look like.

The first Arabic domain is likely to be Egypt's and in Russia orders are already being taken for the country's hoped-for new TLD. The address НОВЫЙУЧЕНЫЙ.РФ - a rough translation of "newscientist" with the Cyrillic domain that stands for Russian Federation - can be registered today.

Though they will be invisible to many of today's users, these changes are a bellwether for the web's future. Today Latin-script languages predominate. But before long Chinese will overtake English as the most used language, and web use in other places with scripts of their own, such as India and Russia, is growing fast. The Middle East is spawning new users faster than any other region.

The image below, portraying links between blogs, represents just one facet of the ever-changing shape of the internet. More changes like the arrival of non-Latin domain names are sure to come as the network underlying everyday life starts to properly live up to its "worldwide" moniker.


by Tom Simonite

Johns Hopkins scientists discover a controller of brain circuitry

By combining a research technique that dates back 136 years with modern molecular genetics, a Johns Hopkins neuroscientist has been able to see how a mammal's brain shrewdly revisits and reuses the same molecular cues to control the complex design of its circuits. Details of the observation in lab mice, published Dec. 24 in Nature, reveal that semaphorin, a protein found in the developing nervous system that guides filament-like processes, called axons, from nerve cells to their appropriate targets during embryonic life, apparently assumes an entirely different role later on, once axons reach their targets. In postnatal development and adulthood, semaphorins appear to be regulating the creation of synapses — those connections that chemically link nerve cells.

"With this discovery we're able to understand how semaphorins regulate the number of synapses and their distribution in the part of the brain involved in conscious thought," says David Ginty, Ph.D., a professor in the neuroscience department at the Johns Hopkins University School of Medicine and a Howard Hughes Medical Institute investigator. "It's a major step forward, we believe, in our understanding of the assembly of neural circuits that underlie behavior."

Because the brain's activity is determined by how and where these connections form, Ginty says that semaphorin's newly defined role could have an impact on how scientists think about the early origins of autism, schizophrenia, epilepsy and other neurological disorders.

The discovery came as a surprise finding in studies by the Johns Hopkins team to figure out how nerve cells develop axons, which project information from the cells, as well as dendrites, which essentially bring information in. Because earlier work from the Johns Hopkins labs of Ginty and Alex Kolodkin, Ph.D., showed that semaphorins affect axon trajectory and growth, they suspected that perhaps these guidance molecules might have some involvement with dendrites.

Kolodkin, a professor in the neuroscience department at Johns Hopkins and a Howard Hughes Medical Institute investigator, discovered and cloned the first semaphorin gene in the grasshopper when he was a postdoctoral fellow. Over the past 15 years, numerous animal models, including strains of genetically engineered mice, have been created to study this family of molecules.

Using two lines of mice — one missing semaphorin and another missing neuropilin, its receptor — postdoctoral fellow Tracy Tran used a classic staining method called the Golgi technique to look at the anatomy of nerve cells from mouse brains. (The Golgi technique involves soaking nerve tissue in silver chromate to make cells' inner structures visible under the light microscope; it allowed neuroanatomists in 1891 to determine that the nervous system is interconnected by discrete cells called neurons.)

Tran saw unusually pronounced "spines" sprouting willy-nilly in peculiar places and in greater numbers on the dendrites in the neurons of semaphorin-lacking and neuropilin-lacking mice compared to the normal wild-type animals. It's at the tips of these specialized spines that a lot of synapses occur and neuron-to-neuron communication happens, so Tran suspected there might be more synapses and more electrical activity in the neurons of the mutant mice.

The researchers tested this hypothesis by examining even thinner brain slices under an electron microscope.

The spines of both semaphorin-lacking and neuropilin-lacking mice were dramatically enlarged compared with the smaller, spherical-looking spines of the wild-type mice. In wild types, Tran generally noted a single site of connection per spine. In the mutants, the site of connection between two neurons was often split.

Next, the team recorded the electrical output of mutant and wild-type neurons and found that the mutants, with more spines and larger spines, also had about a 2.5-times increase in the frequency of electrical activity, suggesting that this abnormal synaptic transmission is due to an increase in the number of synapses.

What causes synapses to form or not form in appropriate or inappropriate places is an extremely important and poorly understood process in the development of the nervous system, Kolodkin says, explaining that the neurons his team studies can have up to 10,000 synaptic connections with other neurons. If connections between neurons are not being formed how and where they're supposed to, then miscommunication occurs and circuits malfunction; as a result, any number of diseases or disorders might develop.

"Seizures can be interpreted as an uncontrolled rapid-firing of certain neural circuits," Kolodkin asserts. "Clearly there's a deficit in these animals that has a human corollary with respect to epilepsy. It's also thought that schizophrenia and autism spectrum disorders have developmental origins of one sort or another. There likely are aspects to the formation of synapses — if they're not in the correct location and in the correct number — that lead to certain types of defects. The spine deficits in these mice that are lacking semaphorin or its receptor appear very similar to those that are found in Fragile X, for instance."

Johns Hopkins Medical Institutions

IBM Backs an OS for the 'Private Cloud'


An open-source Web-based operating system called eyeOS is getting a big boost from IBM. The computer giant has begun selling high-end mainframe servers with eyeOS pre-installed, hoping the operating system will entice customers who are hesitant about using cloud computing.

Managed by a small company based in Barcelona, eyeOS lets users access a virtual desktop through a Web browser. The user can treat that virtual desktop like the desktop of a regular PC, launching and running applications within it.

Though individuals can use the operating system over the Internet through a site hosted by eyeOS, IBM makes it possible for customers to host the service themselves. With the software installed on the mainframe server, a company could offer virtual desktops to its employees, who could then access their "work computers" from any device.

Unlike projects like Google's ChromeOS, which is designed to let people access the entire world of Web applications through the browser, eyeOS is designed to access a specific set of applications "installed" on the virtual desktop. Using the system, an organization could provide employees with productivity applications, its own custom applications, and access to proprietary data. The ability to access these through a single Web-based operating system, says the project's founder, Pau Garcia-Mila, saves users from needing passwords to different Web-based services. It also allows the applications to be more compatible with each other.

Cloud computing most often means running data and applications on remote servers hosted by a company such as Amazon.com. New technologies allow the hosting company to share its processing and storage resources efficiently among all its customers, enabling it to offer low prices. Customers of cloud providers save money because the rates are low, they don't have to buy their own equipment, and they can buy just as much computing power as they need, changing the quantity as their demands fluctuate.

Desktop in a browser: IBM hopes customers who buy its mainframe servers will use an included Web-based operating system called eyeOS, shown above, to experiment with cloud computing technologies. Credit: eyeOS


IBM's goal with this product is to help customers build "private clouds," since some companies hesitate to host data and applications on public clouds, often due to concerns about security and reliability. The idea of a private cloud is to set up--on a company's own servers--the same sorts of efficiencies used by cloud providers, without having to entrust sensitive data to an outside organization.

"For most well-established, large enterprises, there is in general some distrust with public cloud services," says IBM's mainframe cloud initiative leader, Andrea Greggo. "This is driving the focus on wanting to contain these environments behind [a] firewall but still benefit from the value of cloud."

Customers can use IBM's new servers for the data processing typically expected of mainframes, but Greggo says the servers also let customers take advantage of products such as eyeOS.

But Frank Gillett, a principal analyst at Forrester Research, calls the term "private cloud" an oxymoron. He compares what IBM offers to virtualization services already offered by companies such as VMware.

Gillett acknowledges that eyeOS is different from other virtual desktop systems because it allows users to access the desktop through a Web browser instead of a special application. Nonetheless, he remains skeptical because eyeOS is not based on a popular operating system such as Microsoft Windows. He believes many businesses will stick with virtualization services that let them use familiar software. Though some companies have tried to build Web-based operating systems, he says, "None of these startups have made it into the mainstream conversations."

By Erica Naone

How to Detect Explosives at Airports

The bomb that Umar Farouk Abdulmutallab reportedly tried to set off as his flight neared Detroit on Christmas could have been detected using existing screening technologies, had they only been used. Not only could the explosives have been spotted using back-scatter X-rays or millimeter wave technology--which can see through clothes--invisible traces of the explosive could have been detected using chemical sensors. But both technologies, if used to screen all passengers, would lead to long lines at airport security checkpoints.

These images were recorded with millimeter wave technology.

The main explosive used has been identified as pentaerythritol tetranitrate (PETN). Unlike some explosives, it does not produce enough vapors to be detected through sealed containers. But it could be detected by swabbing a person's hands or the outside of a briefcase, or in devices at airports that use puffs of air to dislodge trace explosives particles, which are then sampled and tested via spectrometers. Such traces could also be detected by new materials that glow (or stop glowing) in response to certain explosives, and can be much faster than the technology used now in airports.

A secondary explosive--a liquid--also appears to have been used in the attempt to set off the bomb. This likely would have produced enough vapor that it could be detected through a sealed container using handheld devices that are starting to be used in some airports. Depending on the exact explosive used, it could have been detected, for example, by this handheld sensor.

PETN would also easily show up on scanners that use X-rays or those that use 30 to 300 gigahertz electromagnetic waves (called millimeter wave scanners) that are able to distinguish between skin and other materials. Explosives should show up particularly clearly because they're typically denser than other materials, says Aimee Rose, a principal researcher at ICx Technologies, which makes explosives detection equipment. "One of the reasons that they are explosive is that they're a lot denser than typical materials. You need a lot of material in a small area."
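As a quick sanity check on the terminology: the 30-to-300-gigahertz band these scanners use corresponds to free-space wavelengths of roughly 1 to 10 millimeters, which is where the name "millimeter wave" comes from. A minimal sketch of that arithmetic, using nothing beyond the speed of light:

```python
# Why 30-300 GHz is called "millimeter wave":
# free-space wavelength = speed of light / frequency.
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimeters for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1e3

print(wavelength_mm(30))   # ~10 mm at the low end of the band
print(wavelength_mm(300))  # ~1 mm at the high end
```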

But neither of these technologies is as fast as a standard metal detector, and they're expensive and can be difficult to operate, making it impractical to install enough of them to keep people moving quickly. "Any of these systems are likely going to slow down checkpoints," Rose says, although she notes time could be saved by eliminating the need for pat downs.

Such systems, however, could eliminate other security measures--such as being restricted to your seat for the last hour of flights or not being able to use blankets--that are both inconvenient and may not do much good. That, and the added safety, could make it worth the extra wait at airports.

By Kevin Bullis

Security in the Ether

In 2006, when Amazon introduced the Elastic Compute Cloud (EC2), it was a watershed event in the quest to transform computing into a ubiquitous utility, like electricity. Suddenly, anyone could scroll through an online menu, whip out a credit card, and hire as much computational horsepower as necessary, paying for it at a fixed rate: initially, 10 cents per hour to use Linux (and, starting in 2008, 12.5 cents per hour to use Windows). Those systems would run on "virtual machines" that could be created and configured in an instant, disappearing just as fast when no longer needed. As their needs grew, clients could simply put more quarters into the meters. Amazon would take care of hassles like maintaining the data center and network. The virtual machines would, of course, run inside real ones: the thousands of humming, blinking servers clustered in Amazon's data centers around the world. The cloud computing service was efficient, cheap, and equally accessible to individuals, companies, research labs, and government agencies.
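The metered, pay-as-you-go model described above is easy to reason about with simple arithmetic. A minimal sketch, using the launch-era rates quoted in the text; the 730-hour average month is an assumption for illustration:

```python
# Back-of-the-envelope costing for EC2's original metered pricing.
# Rates are the launch-era figures quoted in the text.
LINUX_RATE = 0.10     # dollars per instance-hour (Linux, 2006)
WINDOWS_RATE = 0.125  # dollars per instance-hour (Windows, 2008)

def monthly_cost(rate_per_hour: float, instances: int, hours: float = 730) -> float:
    """Cost of running `instances` virtual machines for an average month."""
    return rate_per_hour * instances * hours

print(monthly_cost(LINUX_RATE, 1))     # one Linux VM running continuously
print(monthly_cost(WINDOWS_RATE, 10))  # a ten-VM Windows cluster
```

The point of the model is the last parameter: when demand drops, `instances` (or `hours`) drops, and so does the bill, with no idle hardware to write off.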

But it also posed a potential threat. EC2 brought to the masses something once confined mainly to corporate IT systems: engineering in which Oz-like programs called hypervisors create and control virtual processors, networks, and disk drives, many of which may operate on the same physical servers. Computer security researchers had previously shown that when two programs are running simultaneously on the same operating system, an attacker can steal data by using an eavesdropping program to analyze the way those programs share memory space. They posited that the same kinds of attacks might also work in clouds when different virtual machines run on the same server.

In the immensity of a cloud setting, the possibility that a hacker could even find the intended prey on a specific server seemed remote. This year, however, three computer scientists at the University of California, San Diego, and one at MIT went ahead and did it (see "Snooping Inside Amazon's Cloud" in above image slideshow). They hired some virtual machines to serve as targets and others to serve as attackers--and tried to get both groups hosted on the same servers at Amazon's data centers. In the end, they succeeded in placing malicious virtual machines on the same servers as targets 40 percent of the time, all for a few dollars. While they didn't actually steal data, the researchers said that such theft was theoretically possible. And they demonstrated how the very advantages of cloud computing--ease of access, affordability, centralization, and flexibility--could give rise to new kinds of insecurity. Amazon stressed that nobody has successfully attacked EC2 in this manner and that the company has now prevented that specific kind of assault (though, understandably, it wouldn't specify how). But what Amazon hasn't solved--what nobody has yet solved--is the security problem inherent in the size and structure of clouds.

Cloud computing--programs and services delivered over the Internet--is rapidly changing the way we use computers (see Briefing, July/August 2009, and "Clouds, Ascending" in above slideshow). Gmail, Twitter, and Facebook are all cloud applications, for example. Web-based infrastructure services like Amazon's--as well as versions from vendors such as Rackspace--have attracted legions of corporate and institutional customers drawn by their efficiency and low cost. The clientele for Amazon's cloud services now includes the New York Times and Pfizer. And Google's browser and forthcoming operating system (both named Chrome) mean to provide easy access to cloud applications.

Even slow-moving government agencies are getting into the act: the City of Los Angeles uses Google's Apps service for e-mail and other routine applications, and the White House recently launched www.apps.gov to encourage federal agencies to use cloud services. The airline, retail, and financial industries are examples of those that could benefit from cloud computing, says Dale Jorgenson, a Harvard economist and expert on the role of information technology in national productivity. "The focus of IT innovation has shifted from hardware to software applications," he says. "Many of these applications are going on at a blistering pace, and cloud computing is going to be a great facilitative technology for a lot of these people."

Of course, none of this can happen unless cloud services are kept secure. And they are not without risk. When thousands of different clients use the same hardware at large scale, which is the key to the efficiency that cloud computing provides, any breakdowns or hacks could prove devastating to many. "Today you have these huge, mammoth cloud providers with thousands and thousands of companies cohosted in them," says Radu Sion, a computer scientist at the State University of New York at Stony Brook. "If you don't have everybody using the cloud, you can't have a cheap service. But when you have everybody using the clouds, you have all these security issues that you have to solve suddenly."

Cloud Crises

Cloud computing actually poses several separate but related security risks. Not only could stored data be stolen by hackers or lost to breakdowns, but a cloud provider might mishandle data--or be forced to give it up in response to a subpoena. And it's clear enough that such security breaches are not just the stuff of academic experiments. In 2008, a single corrupted bit in messages between servers used by Amazon's Simple Storage Service (S3), which provides online data storage by the gigabyte, forced the system to shut down for several hours. In early 2009, a hacker who correctly guessed the answer to a Twitter employee's personal e-mail security question was able to grab all the documents in the Google Apps account the employee used. (The hacker gleefully sent some to the news media.) Then a bug compromised the sharing restrictions placed on some users' documents in Google Docs. Distinctions were erased; anyone with whom you shared document access could also see documents you shared with anyone else.

And in October, a million T-Mobile Sidekick smart phones lost data after a server failure at Danger, a subsidiary of Microsoft that provided the storage. (Much of the data was later recovered.) Especially with applications

Cloud crowd: Some 4,000 servers hum at IBM’s cloud computing center in San Jose, CA. Credit: Jason Madara

To all this, the general response of the cloud industry is: clouds are more secure than whatever you're using now. Eran Feigenbaum, director of security for Google Apps, says cloud providers can keep ahead of security threats much more effectively than millions of individuals and thousands of companies running their own computers and server rooms. For all the hype over the Google Docs glitch, he points out, it affected less than .05 percent of documents that Google hosted. "One of the benefits of the cloud was the ability to react in a rapid, uniform manner to these people that were affected," he says. "It was all corrected without users having to install any software, without any server maintenance."

Think about the ways security can be compromised in traditional settings, he adds: two-thirds of respondents to one survey admitted to having mislaid USB keys, many of them holding private company data; at least two million laptops were stolen in the United States in 2008; companies can take three to six months to install urgent security patches, often because of concern that the patches will trigger new glitches. "You can't get 100 percent security and still manage usability," he says. "If you want a perfectly secure system, take a computer, disconnect it from any external sources, don't put it on a network, keep it away from windows. Lock it up in a safe."

But not everyone is so sanguine. At a computer security conference last spring, John Chambers, the chairman of Cisco Systems, called cloud computing a "security nightmare" that "can't be handled in traditional ways." At the same event, Ron Rivest, the MIT computer scientist who coinvented the RSA public-key cryptography algorithm widely used in e-commerce, said that the very term cloud computing might better be replaced by swamp computing. He later explained that he meant consumers should scrutinize the cloud industry's breezy security claims: "My remark was not intended to say that cloud computing really is 'swamp computing' but, rather, that terminology has a way of affecting our perceptions and expectations. Thus, if we stop using the phrase cloud computing and started using swamp computing instead, we might find ourselves being much more inquisitive about the services and security guarantees that 'swamp computing providers' give us."


SNOOPING INSIDE AMAZON’S CLOUD
Researchers recently figured out a way to place malicious “virtual machines” on the servers hosting virtual machines assigned to intended victims in Amazon’s Elastic Compute Cloud, which they say could make it possible for an attacker to steal data. Amazon says it has since prevented this kind of attack and that the threat of data theft had only been theoretical. Here’s how the researchers did it.
1. The researchers hired “victim” virtual machines (VMs) from Amazon’s cloud and noted the machines’ IP addresses. They learned that if they bought multiple VMs at nearly the same time, those machines would have similar IP addresses, indicating that they were probably hosted on the same server. Credit: Bryan Christie Design



A similar viewpoint, if less colorfully expressed, animates a new effort by NIST to define just what cloud computing is and how its security can be assessed. "Everybody has confusion on this topic," says Peter Mell; NIST is on its 15th version of the document defining the term. "The typical cloud definition is vague enough that it encompasses all of existing modern IT," he says. "And trying to pull out unique security concerns is problematic." NIST hopes that identifying these concerns more clearly will help the industry forge some common standards that will keep data more secure. The agency also wants to make clouds interoperable so that users can more easily move their data from one to another, which could lead to even greater efficiencies.

Given the industry's rapid growth, the murkiness of its current security standards, and the anecdotal accounts of breakdowns, it's not surprising that many companies still look askance at the idea of putting sensitive data in clouds. Though security is currently fairly good, cloud providers will have to prove their reliability over the long term, says Larry Peterson, a computer scientist at Princeton University who directs an Internet test bed called the PlanetLab Consortium. "The cloud provider may have appropriate security mechanisms," Peterson says. "But can I trust not only that he will protect my data from a third party but that he's not going to exploit my data, and that the data will be there five years, or 10 years, from now? Yes, there are security issues that need attention. But technology itself is not enough. The technology here may be out ahead of the comfort and the trust."

In a nondescript data center in Somerville, MA, just outside Boston, lies a tangible reminder of the distrust that Peterson is talking about. The center is owned by a small company called 2N+1, which offers companies chilled floor space, security, electricity, and connectivity. On the first floor is a collection of a dozen black cabinets full of servers. Vincent Bono, a cofounder of 2N+1, explains these are the property of his first client, a national bank. It chose to keep its own servers rather than hire a cloud. And for security, the bank chose the tangible kind: a steel fence.

Encrypting the Cloud

Cloud providers don't yet have a virtual steel fence to sell you. But at a minimum, they can promise to keep your data on servers in, say, the United States or the European Union, for regulatory compliance or other reasons. And they are working on virtual walls: in August, Amazon announced plans to offer a "private cloud" service that ensures more secure passage of data from a corporate network to Amazon's servers. (The company said this move was not a response to the research by the San Diego and MIT group. According to Adam Selipsky, vice president of Amazon Web Services, the issue was simply that "there is a set of customers and class of applications asking for even more enhanced levels of security than our existing services provided.")

Meanwhile, new security technologies are emerging. A group from Microsoft, for example, has proposed a way to prevent users of one virtual machine on a server from gleaning information by monitoring the use of shared cache memory by another virtual machine on the same server, something that the San Diego and MIT researchers suggested was possible. And researchers at IBM have proposed a new kind of security mechanism that would, in essence, frisk new virtual machines as they entered the cloud. Software would monitor each one to see how it operates and ensure its integrity, in part by exploring its code. Such technologies could be ready for market within two or three years.

2. They realized that to increase the odds of placing a malicious VM on the same server as a victim VM, they would have to force a victim to hire a machine at a certain time. One way to do this, they say, would be to bombard the victim’s website with requests, forcing the victim to increase capacity. Credit: Bryan Christie Design



But fully ensuring the security of cloud computing will inevitably fall to the field of cryptography. Of course, cloud users can already encrypt data to protect it from being leaked, stolen, or--perhaps above all--released by a cloud provider facing a subpoena. This approach can be problematic, though. Encrypted documents stored in a cloud can't easily be searched or retrieved, and it's hard to perform calculations on encrypted data. Right now, users can get around these problems by leaving their information in the cloud unencrypted ("in the clear") or pulling the encrypted material back out to the safety of their own secure computers and decrypting it when they want to work with it. As a practical matter, this limits the usefulness of clouds. "If you have to actually download everything and move it back to its original place before you can use that data, that is unacceptable at the scale we face today," says Kristin Lauter, who heads the cryptography research group at Microsoft Research.

Emerging encryption technologies, however, could protect data in clouds even as users search it, retrieve it, and perform calculations on it. And this could make cloud computing far more attractive to industries such as banking and health care, which need security for sensitive client and patient data. For starters, several research groups have developed ways of using hierarchical encryption to provide different levels of access to encrypted cloud data. A patient, for example, could hold a master key to his or her own electronic medical records; physicians, insurers, and others could be granted subkeys providing access to certain parts of that information.
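The master-key-and-subkeys idea can be sketched with ordinary key derivation: a parent key deterministically derives a child key per record section, so only the relevant subkey ever leaves the patient's hands. This is an illustrative sketch using HMAC-SHA256, not the specific hierarchical-encryption constructions the research groups propose; the section labels are hypothetical.

```python
import hashlib
import hmac

def derive_subkey(parent_key: bytes, label: str) -> bytes:
    """Derive a child key from a parent key and a section label (HMAC-SHA256)."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

# Hypothetical patient master key (in practice: randomly generated, kept secret).
master = hashlib.sha256(b"patient master secret").digest()

# Per-section subkeys: a radiologist could be given only the imaging key.
imaging_key = derive_subkey(master, "records/imaging")
billing_key = derive_subkey(master, "records/billing")

# Different labels yield independent subkeys, and HMAC's one-wayness means
# holding a subkey does not reveal the master key or sibling subkeys.
print(imaging_key != billing_key)  # True
```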

Ideally, we'd make it more practical to work with sensitive data that needs to be encrypted, such as medical records, so that unintended viewers couldn't see it if it were exposed by a hack or a glitch at the cloud provider. "The general theme of cloud computing is that you want to be able to outsource all kinds of functionality but you don't want to give away your privacy--and you need very versatile cryptography to do that," says Craig Gentry, a cryptography researcher at IBM's Watson Research Center in Yorktown, NY. "It will involve cryptography that is more complicated than we use today."

To find and retrieve encrypted documents, groups at Carnegie Mellon University, the University of California, Berkeley, and elsewhere are working on new search strategies that start by tagging encrypted cloud-based files with encrypted metadata. To perform a search, the user encrypts search strings using mathematical functions that enable strings to find matches in the encrypted metadata. No one in the cloud can see the document or even the search term that was used. Microsoft Research recently introduced a theoretical architecture that would stitch together several cryptographic technologies to make the encrypted cloud more searchable.

The problem of how to manipulate encrypted data without decrypting it, meanwhile, stumped researchers for decades until Gentry made a breakthrough early in 2009. While the underlying math is a bit thick, Gentry's technique involves performing calculations on the encrypted data with the aid of a mathematical object called an "ideal lattice." In his scheme, any type of calculation can be performed on data that's securely encrypted inside the cloud. The cloud then releases the computed answers--in encrypted form, of course--for users to decode outside the cloud. The downside: the process eats up huge amounts of computational power, making it impractical for clouds right now. "I think one has to recognize it for what it is," says Josyula Rao, senior manager for security at IBM Research. "It's like the first flight that the Wright Brothers demonstrated." But, Rao says, groups at IBM and elsewhere are working to make Gentry's new algorithms more efficient.
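The flavor of "computing on data without decrypting it" can be shown with a much older and much weaker example than Gentry's scheme: textbook RSA is multiplicatively homomorphic, so a server can multiply two ciphertexts and the client decrypts the product. This is emphatically not Gentry's fully homomorphic, lattice-based construction (which supports arbitrary computation), and textbook RSA with tiny parameters is insecure; it only illustrates the core idea.

```python
# Toy multiplicative homomorphism via textbook RSA.
# Insecure on purpose: tiny primes, no padding. Illustration only.
p, q = 61, 53
n = p * q                            # public modulus, 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
# The "cloud" multiplies ciphertexts without ever seeing a or b...
product_ct = (enc(a) * enc(b)) % n
# ...and the client decrypts the result of the computation.
print(dec(product_ct))  # 42 == 7 * 6
```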


3. As the victim rented new VMs to handle the extra demand, the attacker also rented VMs. By checking IP addresses, the researchers found that the victims and attackers wound up on the same Amazon servers 40 percent of the time. Credit: Bryan Christie Design

Risks and Benefits

If cloud computing does become secure enough to be used to its full potential, new and troubling issues may arise. For one thing, even clouds that are safe from ordinary hackers could become central points of Internet control, warns Jonathan Zittrain, the cofounder of Harvard's Berkman Center for Internet and Society and the author of The Future of the Internet--and How to Stop It. Regulators, courts, or overreaching government officials might see them as convenient places to regulate and censor, he says.

What's more, cloud providers themselves could crack down on clients if, say, copyright holders apply pressure to stop the use of file-sharing software. "For me," Zittrain says, "the biggest issue in cloud security is not the Sidekick situation where Microsoft loses your data." More worrisome to him are "the increased ability for the government to get your stuff, and fewer constitutional protections against it; the increased ability for government to censor; and increased ability for a vendor or government to control innovation and squash truly disruptive things."

Zittrain also fears that if clouds dominate our use of IT, they may turn into the kinds of "walled gardens" that characterized the Internet in the mid-1990s, when companies such as CompuServe, Prodigy, and AOL provided limited menus of online novelties such as news, e-commerce, and e-mail to the hoi polloi. Once people pick a cloud and applications they like, he says--Google Apps, for example--they may find they have limited access to great apps in other clouds, much as Facebook users can't network with people on MySpace.

But such concerns aren't stopping the ascendance of the cloud. And if cloud security is achieved, the benefits could be staggering. "There is a horrendous amount of computing and database management where cloud computing is clearly relevant," says Harvard's Dale Jorgenson. Imagine if today's emerging online repositories for personal health data, such as Google Health and Microsoft HealthVault, could link up with the growing number of electronic records systems at hospitals in a way that keeps private data protected at all times. The resulting medical megacloud could spread existing applications cheaply and efficiently to all corners of the medical profession. Doctors could easily compare patients' MRI scans, for example, with those of other patients around the country, and delve into vast databases to analyze the efficacy of treatments and prevention measures (see "Prescription: Networking," November/December 2009). "The potential there is enormous, because there are a couple of transformations that may occur in medicine in the near future from vast collections of medical records," says Ian Foster, a computer scientist who leads the Computation Institute at Argonne National Laboratory and the University of Chicago. Today, he points out, individuals are demanding access to their own medical information while medical institutions seek new sources of genomic and other data. "The two of those, together, can be powered by large-scale sharing of information," he says. "And maybe you can do it in the cloud. But it has particularly challenging security problems."

This isn't the first time a new information technology has offered profound benefits while raising potentially intolerable security risks. The advent of radio posed similar issues a century ago, says Whitfield Diffie, one of the pioneers of public-key cryptography, who is now a visiting professor at Royal Holloway, University of London. Radio was so much more flexible and powerful than what it replaced--the telegraph--that you had to adopt it to survive in business or war. The catch was that radio can be picked up by anyone. In radio's case, fast, automated encryption and decryption technologies replaced slow human encoders, making it secure enough to realize its promise. Clouds will experience a similar evolution. "Clouds are systems," says NIST's Peter Mell. "And with systems, you have to think hard and know how to deal with issues in that environment. The scale is so much bigger, and you don't have the physical control. But we think people should be optimistic about what we can do here. If we are clever about deploying cloud computing with a clear-eyed notion of what the risk models are, maybe we can actually save the economy through technology."

David Talbot is Technology Review's chief correspondent.

4. Once the malicious VMs were on the same server as the victim’s VMs, the researchers were able to show they could monitor the victim’s use of computing resources. They said outright data theft would also be possible, though they didn’t take this step.
Source: Ristenpart et al., 2009. “Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds.” In the Proceedings of the 16th ACM Conference on Computer and Communications Security. Credit: Bryan Christie Design


Cloud infrastructure: More and more computing services are being delivered over the Internet. Behind the technology are huge remote data centers like these two football-field-sized buildings that Google operates in The Dalles, OR, shown during their construction four years ago. Credit: Craig Mitchell Dyer/Getty Images


CLOUDS, ASCENDING
Traditional IT is complex to deploy and carries high overhead costs ...
Cost Breakdown


... but cloud computing services are highly efficient, which is one reason they’re growing fast.
Worldwide IT Spending Projections, In Billions
Source: IDC

Until questions about privacy and security are dealt with, however, companies will continue to reserve cloud services for the least sensitive tasks.
How Public Cloud Services Are Used
Source: 451 Group


By David Talbot

Applied Materials Moves Solar Expertise to China

The world's biggest supplier of solar-manufacturing equipment has opened a research and development center in China, and its chief technology officer will relocate from Silicon Valley to that country next month. Applied Materials, founded in 1967 as a semiconductor company, has manufactured in China for 25 years, but is expanding its presence to be closer to its customers and develop products suited to the country's urban population.

Glass power: Equipment at Applied Materials’s R&D center in Xi’an, China, processes 5.7-square-meter glass panels to make solar cells.


"We're doing R&D in China because they're becoming a big market whose needs are different from those in the U.S.," says Mark Pinto, Applied Materials's CTO. Going forward, he says, "energy will become the biggest business for the company," and China, not the U.S., "will be the biggest solar market in the world."

Indeed, the move by Applied Materials is just the latest sign that China is rapidly moving to the forefront in adopting renewable energy technologies. China is no model for addressing climate change--its greenhouse-gas emissions are expected to nearly double by 2030. The lion's share of demand for photovoltaics comes from Europe, which accounted for 82 percent of the photovoltaics sold in 2008, according to a report by Solarbuzz. China currently accounts for less than 1 percent of that demand, but its share is expected to grow; Beijing aims to produce 20,000 megawatts of solar energy by 2020.

One factor in the move by Applied Materials is that China offers manufacturing incentives that aren't available for solar companies in the U.S. The tax credits in the federal stimulus package that passed in February were one of the first such incentives in a long time, says Pinto. But, he adds, "The location of factories of the kind we make isn't driven by cost." He lists the considerations that led to the company's expansion in China: "Where does the product get consumed, what is the cost of shipping, and what incentives does the government offer?" At the research center in Xi'an, a large city home to many universities, Applied Materials will work with local customers and researchers to develop new products.

Applied Materials, the world's largest maker of semiconductor- and display-manufacturing equipment, entered the solar market in 2006. In doing this, it built on its expertise in equipment for making large films of silicon, the material on which both display circuits and solar cells are based. By 2008, the company became the world's largest photovoltaic-equipment supplier.

Solar in China: Applied Materials engineers in the Solar Technology Center in Xi’an, China.


Its strategy, says Pinto, is to "help drive down costs through scaling." The company developed equipment for building amorphous-silicon solar cells on thin glass panels the size of a garage door, 5.7 square meters. These sheets can then be sliced into smaller panels or left as they are. Working at this scale saves money; Applied Materials made it work by developing equipment that can coat the huge panels with uniform silicon films just nanometers thick.

To compete in both the U.S. and Chinese markets, says Ken Zweibel, director of the George Washington Solar Institute, the company will need to increase the efficiency of the solar cells that can be made using its equipment. "Amorphous silicon has a relatively low cost, but its efficiency is the lowest of all the thin-film solar cells," says Zweibel. Cells made of cadmium telluride that are sold by U.S. thin-film company First Solar have an efficiency of around 11 percent. The amorphous-silicon solar cells made on Applied Materials's equipment are at just over 8 percent. "The 3 percent difference in efficiency is a 30 percent difference in terms of overall cost," Zweibel notes.
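Zweibel's rule of thumb follows from simple arithmetic: for a fixed manufacturing cost per square meter of panel, cost per watt varies inversely with efficiency. A quick check, using a made-up module cost (the placeholder cancels out of the ratio, so only the efficiencies matter):

```python
# Back-of-the-envelope check of Zweibel's claim. For a fixed panel cost
# per square meter, cost per watt scales as 1/efficiency. The $100/m^2
# module cost is a placeholder, not a real industry figure.
AREA_COST = 100.0        # dollars per square meter (illustrative)
IRRADIANCE = 1000.0      # watts per square meter, standard test conditions

def cost_per_watt(efficiency: float) -> float:
    return AREA_COST / (IRRADIANCE * efficiency)

cdte = cost_per_watt(0.11)   # cadmium telluride, ~11 percent efficient
a_si = cost_per_watt(0.08)   # amorphous silicon, ~8 percent efficient

# 0.11 / 0.08 = 1.375, so amorphous silicon comes out roughly
# 30-40 percent more expensive per watt -- Zweibel's ballpark.
print(f"a-Si costs {a_si / cdte:.3f}x as much per watt as CdTe")
```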

Pinto says research at the Xi'an center will focus on products suited to the way that country's population is concentrated, in cities with tall buildings. "Cities in China don't have very much rooftop" on which to place solar cells, says Pinto. The company plans to develop technologies such as electrochromic windows, which save heating and cooling costs by changing color with the weather. Solar cells embedded in windows might act both as a shade and an energy source. The company is also researching LED lighting and plans to work on thin-film batteries.

By Katherine Bourzac

A Quantum Leap in Battery Design

A "digital quantum battery" concept proposed by a physicist at the University of Illinois at Urbana-Champaign could provide a dramatic boost in energy storage capacity--if it meets its theoretical potential once built.

The concept calls for billions of nanoscale capacitors and would rely on quantum effects--the weird phenomena that occur at atomic size scales--to boost energy storage. Conventional capacitors consist of one pair of macroscale conducting plates, or electrodes, separated by an insulating material. Applying a voltage creates an electric field in the insulating material, storing energy. But all such devices can only hold so much charge, beyond which arcing occurs between the electrodes, wasting the stored power.

If capacitors were instead built as nanoscale arrays--crucially, with electrodes spaced about 10 nanometers (roughly 100 atoms) apart--quantum effects ought to suppress such arcing. For years researchers have recognized that nanoscale capacitors exhibit unusually large electric fields, suggesting that the tiny scale of the devices was responsible for preventing energy loss. But "people didn't realize that a large electric field means a large energy density, and could be used for energy storage that would far surpass anything we have today," says Alfred Hubler, the Illinois physicist and lead author of a paper outlining the concept, to be published in the journal Complexity.
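Why a larger tolerable field matters so much can be seen from the textbook formula for energy stored in a capacitor's gap, u = ½ε₀εᵣE²: energy density grows with the square of the field, so every tenfold gain in breakdown field buys a hundredfold gain in stored energy. The field values below are illustrative, not taken from Hubler's paper:

```python
# Energy stored per unit volume in a capacitor's insulating gap:
#   u = (1/2) * eps0 * eps_r * E^2
# Because u grows with the SQUARE of the field E, suppressing arcing
# (i.e. tolerating a higher E) multiplies storage quadratically.
EPS0 = 8.854e-12                      # vacuum permittivity, F/m

def energy_density(field: float, eps_r: float = 1.0) -> float:
    """Field energy density in J/m^3; `field` given in V/m."""
    return 0.5 * EPS0 * eps_r * field ** 2

for field in (1e8, 1e9, 1e10):        # V/m -- illustrative values
    print(f"E = {field:.0e} V/m  ->  u = {energy_density(field):.2e} J/m^3")
```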

Hubler claims the resulting power density (the speed at which energy can be stored or released) could be orders of magnitude greater, and the energy density (the amount of energy that can be stored) two to 10 times greater than possible with today's best lithium-ion and other battery technologies.

What's more, digital quantum batteries could be fabricated with existing lithographic chip-manufacturing technologies, using cheap, nontoxic materials such as iron and tungsten atop a silicon substrate, he says. The resulting devices would, in principle, waste little or no energy as they absorbed and released electrons. Hubler says it may be possible to build a benchtop prototype in one year.

Today, however, digital quantum batteries are merely a patent-pending research concept. Hubler has applied for Defense Advanced Research Projects Agency funding to develop such a prototype, but the concept presents significant challenges. It's not clear that the nanofabricated materials wouldn't break down once loaded with energy, says Joel Schindall, a professor of electrical engineering at MIT.

But Schindall also says the concept has merit. "I'm cautiously intrigued, because he does have some legitimate arguments for the fact that at these quantum dimensions, the energy storage effect is at least predicted to go up considerably," Schindall says. "The first challenge is: are his assumptions correct, or are there some other phenomena that haven't been looked at that get in the way?"

In some ways, the concept represents a variation on existing micro- and nanoelectronic devices. "If you look at it from a digital electronics perspective--it's just a flash drive," says Hubler. "If you look at it from an electrical engineering perspective, you would say these are miniaturized vacuum tubes like in plasma TVs. If you talk to a physicist, this is a network of capacitors."

The digital part of the concept derives from the fact that each nanovacuum tube would be individually addressable. Because of this, the devices could perhaps be used to store data, too.

Other methods exist for boosting the performance of capacitors. Advanced versions, called ultracapacitors, can store significant energy and operate more quickly by increasing the surface area of their electrodes and using an electrolyte. Schindall's group has increased the charge and discharge rates and storage capacity of traditional ultracapacitors by using carbon nanotubes instead of activated carbon on the electrode's surface. In essence, this increases the surface area of the electrode.

The advantages of Schindall's design--increased power output and energy density--could be crucial for applications like rapidly soaking up huge pulses of energy from a field of wind turbines or solar arrays. Plus, his team has actually built a benchtop device. The downside is that the energy density of a given mass of material would still be somewhat lower than that of lithium-ion batteries.

While Hubler hasn't yet built anything, he notes that, in 2005, a group of Korean researchers showed that nanoscale capacitors could be fabricated. Hubler's battery would still require billions or even trillions of such capacitors, however.

"I complete agree we desperately need new ways of storing electric energy," says Schindall. "Though it may be in competition with what I'm doing, I wish him the greatest of success and hope it works."

By David Talbot

The Year in Biomedicine

We may look back on 2009 as the year human genome sequencing finally became routine enough to generate useful medical information ("A Turning Point for Personal Genomes"). The number of sequenced and published genomes shot up from two or three to approximately nine, with another 40 or so genomes sequenced but not yet published. In a few cases, scientists have already found the genetic cause of a disorder by sequencing an affected person's genome.


Scientists have also sequenced the genomes of a number of cancers, comparing each tumor sequence to the patient's normal genome to find the genetic mistakes that might have caused the cells to become cancerous and to metastasize ("Sequencing Tumors to Target Treatment"). The results suggest that even low-grade and medium-grade tumors can be genetically heterogeneous, which could be problematic for molecularly targeted drugs. That points to a need to develop new strategies for drug development and treatment in cancer.

The year brought more good news for aging mice, and maybe humans, too, as scientists identified the first drug that can extend lifespan in mammals ("First Drug Shown to Extend Lifespan in Mammals"). Rapamycin, an antifungal drug currently used to prevent rejection of organ transplants, was found to boost longevity 9 to 13 percent even when it was given to mice that were the mouse equivalent of 60 years old. Previously, genetic engineering and caloric restriction--a nutritionally complete but very low-calorie diet--were the only proven methods of extending lifespan in mammals ("A Clue to Living Longer").

Because of its potent immunosuppressant effect, the drug isn't suitable for this application in humans. But researchers have already found that disrupting part of the same signaling pathway has similar life-extending benefits ("Genetic Fountain of Youth"). Mice with the relevant protein disabled showed superior motor skills, stronger bones, and better insulin sensitivity when they reached mouse middle age. Female mice lived about 20 percent longer than their unaltered counterparts. But male mice, while healthy, didn't have longer lifespans. (In comparison, caloric restriction boosts longevity by about 50 percent.) Scientists now aim to develop drugs that target this pathway, which is thought to act as a kind of gauge for the amount of food available in the environment.

The emergence in April of a new pandemic flu strain, H1N1, rapidly renewed interest in new approaches to making vaccines ("New Vaccines for Swine Flu"). For the first time during an active pandemic, pharmaceutical companies were able to use faster cell-based production methods to create vaccines against the virus, in addition to the traditional egg-based method. (None of these methods has yet been approved for use in the United States--the vaccine currently available was made in eggs.) In November, an advisory panel for the U.S. Food and Drug Administration declared that a novel method of producing flu vaccines in insect cells, while effective, needs more safety testing before it can be approved ("Caterpillar Flu Vaccine Delayed"). The vaccine, developed by Protein Sciences, based in Meriden, CT, uses a single protein from the virus to induce immunity, rather than a dead or weakened version of the virus. Two other companies began clinical trials of flu vaccines made from virus-like particles--protein shells that look just like viruses but do not contain viral DNA ("Delivering a Virus Imposter Quicker").

A new approach to brain surgery, tested by a Swiss team earlier this year, allows surgeons to burn out small chunks of brain tissue without major surgery using specialized sound waves ("Brain Surgery Using Sound Waves"). Neurosurgeons used a technology developed by InSightec, an ultrasound technology company headquartered in Israel. The method employs high-intensity focused ultrasound (HIFU) to target the brain. (HIFU is different than the ultrasound used for diagnostic purposes, such as prenatal screening, and has previously been used to remove uterine fibroids.) Beams from an array of more than 1,000 ultrasound transducers are focused through the skull onto a small piece of diseased tissue, heating it up and destroying it. In the study, nine patients with chronic debilitating pain reported immediate pain relief after the procedure.

Scientists also hope to co-opt the technologies developed for HIFU to modulate brain activity, using low intensity focused ultrasound to activate nerve cells ("Targeting the Brain with Sound Waves"). This approach might one day provide a less invasive alternative to deep-brain stimulation. This procedure, in which surgically implanted electrodes stimulate parts of the brain, is an increasingly common treatment for Parkinson's disease and other neurological problems.

In another first for the brain, scientists discovered this year that our IQ, or general intelligence, depends in large part on our white matter--the fatty layer of insulation that coats the neural wiring of the brain ("Brain Images Reveal the Secret to Higher IQ"). Using a type of brain imaging called diffusion tensor imaging, researchers analyzed the neural wiring in 92 pairs of fraternal and identical twins and found a strong correlation between the integrity of the white matter and performance on a standard IQ test. In addition, the researchers found that the quality of one's white matter is largely genetically determined. They are now searching for genetic variants tied to white matter and IQ.

A feature in the November issue of the magazine further explored the secret of intelligence, revealing that our smarts may be determined by the function and efficiency of the networks within the brain, rather than the number of neurons or the size of any particular region ("Intelligence Explained").

By Emily Singer

A Year of Stimulus for High Tech

A big chunk of February's $787 billion federal stimulus package--about $100 billion--is devoted to discovering, developing, and deploying new technologies ("Can Technology Save the Economy?"). But spending that money takes time, and, as the year closes, much of it has only just started to trickle out into the hands of researchers and industry.


The stimulus package was designed to boost technology in several key areas. It was intended to promote the adoption of electronic medical records, which could help patients in several ways ("The Benefits of Electronic Health Records" and "Prescription: Networking"). About $20 billion has been allocated so far and, of that, the U.S. Department of Health and Human Services has announced $1.2 billion in grants to develop electronic records.

The Recovery Act also allocated some $7.4 billion for increasing rural broadband Internet service. Little has happened so far, although the White House did announce $182 million for 18 projects this month. The U.S. Department of Agriculture, which was given $2.5 billion for broadband, hasn't yet published targets for distributing its money. And of the nearly $5 billion for broadband allocated to the U.S. Department of Commerce, only several million dollars have been awarded, for preliminary studies such as work "to collect and verify the availability, speed, and location of broadband." The White House promises that broadband grants will total $2 billion in the next 75 days.

The lion's share of stimulus funding for technology development--$60 billion--went to energy, largely in the form of grants and tax credits designed to spur renewable energy and energy efficiency. The bill was a windfall for the U.S. Department of Energy, which received $39 billion in addition to its annual budget of approximately $25 billion. The department's Office of Energy Efficiency and Renewable Energy (EERE), which typically gets less than $2 billion a year, won $16.8 billion, and has spent about half a billion so far. Almost all of that, $367 million, has gone to weatherization projects, which don't do much to advance technology but could save a lot of energy. Some $2 billion was allocated for advancing battery manufacturing, which will be key to setting up an advanced battery industry in the United States. About a billion has been awarded so far, although that money has yet to be spent.

A new agency that received its first funding under the bill, the Advanced Research Projects Agency-Energy (ARPA-E), has already announced some award recipients. Of the $400 million the agency received, it's announced awards totaling $150 million, and has started taking applications for a second round of awards ("DOE's Agency Learns from Some Early Mistakes").

ARPA-E has funded several technologies with the potential to bring about big changes to the energy landscape. These include carbon nanotube ultracapacitors that could cut the cost of hybrids ("Ultracapacitor Startup Gets a Big Boost") and carbon nanotube membranes that could make capturing carbon dioxide cheaper ("Carbon Capture with Nanotubes"). Another project could lead to a new kind of coal plant that doesn't release carbon dioxide, yet doesn't raise the price of electricity ("Using Rust to Capture CO2 from Coal Plants").

Several new battery technologies could have a similarly transformative impact. New sodium-ion batteries ("Sodium-Ion Cells for Cheap Energy Storage") and liquid batteries could make storing renewable energy affordable, while metal-air batteries offer the promise of cheap electric vehicles that can go hundreds of miles on a single charge ("Betting on a Metal-Air Battery Breakthrough"). The DOE's Office of Science is also funding a number of programs exploring cutting-edge energy science, and has announced $277 million in stimulus funding for 46 "Energy Frontier Research Centers."

Meanwhile, the DOE has started to issue some of the $125 billion in loans and loan guarantees it's in charge of. For example, solar panel maker Solyndra received a $535 million loan guarantee. The DOE has announced a total of $8.5 billion in loans for developing advanced vehicles. These will go to Ford, Nissan, and two small companies: the electric car maker Tesla Motors and the plug-in hybrid maker Fisker Automotive.

By Kevin Bullis

The Year in Materials

For years now, people have been talking up carbon nanotubes and their potential to be used for far-out applications including strong space-elevator cables, robust electrical transmission lines, and high-performance nanotube computers. These things may still be a decade off, but several advances this year make them sound less like fantasies.

Researchers at Rice University refined methods for spinning acid solutions of carbon nanotubes into fibers hundreds of meters long ("Making Carbon Nanotubes into Long Fibers"). Their process, which could be used industrially (it's similar to how Kevlar is made), is the culmination of eight years of work begun by the late Richard Smalley, who shared the 1996 Nobel Prize in chemistry for the discovery of fullerenes ("Wires of Wonder"). In order to make electrical transmission lines, researchers still need to perfect a process for growing pure batches of metallic nanotubes. Today they come out mixed with semiconducting tubes, and the two must be separated. Still, the Rice demonstration of making nanotubes into large structures is a major accomplishment.

On the nanotube electronics front, the year started out strong. The company Unidym, which makes transparent electrodes from carbon nanotubes, demonstrated its products, and companies including Samsung tested them in displays ("Clear Carbon-Nanotube Films"). Unidym's nanotube films could be incorporated into flexible displays and replace the expensive, brittle materials currently used to make display electrodes.

The year also brought major accomplishments in making more sophisticated nanotube devices for displays, including the integrated circuits that drive them ("Practical Nanotube Electronics"). One of the major advantages of nanotube display circuits is that they could be printed like newspaper, which should bring down costs. And this month at the International Electron Devices Meeting, researchers at Stanford presented the first three-dimensional nanotube circuits ("Complex Integrated Circuits Made of Carbon Nanotubes"). The processes their nanotube circuits can carry out, like adding and storing numbers, are about as sophisticated as what silicon could do in the mid-1960s.

Meanwhile, researchers at Cornell made an interesting basic science demonstration: single nanotubes can be wired up to make extremely efficient solar cells ("Superefficient Solar from Nanotubes"). While conventional solar materials can only produce one electron per striking photon, carbon nanotubes can produce two if the photon has enough energy.

Nanomaterials, Big Energy

Activity in academic labs and companies this year showed that energy-storage materials structured on the nanoscale have a greater capacity than their conventional counterparts. This concept launched two startups that received government funding. FastCAP Systems of Cambridge, MA, is developing ultracapacitors based on carbon nanotubes, which can store a large amount of electrical charge because of their huge surface area ("Ultracapacitor Startup Gets a Big Boost"). The company received an ARPA-E grant for $5.35 million over two-and-a-half years. And Amprius of Menlo Park, CA, received $3 million from the National Institute of Standards and Technology to develop high-performance lithium-ion battery anodes made from silicon nanowires ("More Energy in Batteries"). Both companies are aiming for the electric-car market, hoping to make energy-storage devices that will allow cars to run longer between charges.

Nanomaterials continued to prove their promise in solar cells. Researchers determined that solar cells patterned with nanoscale pillars can convert more energy than smooth ones. The upshot: the performance of cheap materials can be boosted without adding expense, and it's possible to make them on aluminum foil ("Nanopillar Solar Cells"). This work was done at the University of California, Berkeley, by one of Technology Review's 35 young innovators of 2009, Ali Javey. Another Berkeley researcher on our young innovators list, Cyrus Wadia, analyzed the abundance and properties of unconventional solar materials and then made strides in developing them. One of the materials is pyrite, also known as fool's gold, which Wadia is growing as nanocrystals ("Mining Fool's Gold for Solar"). The advantage of nanocrystals for solar cells is that they can be made into inks and cheaply printed. Meanwhile, one of Wadia's mentors, Paul Alivisatos, interim director of the Lawrence Berkeley National Laboratory, founded a company to develop high-efficiency, low-cost solar cells based on nanomaterials. Solexant of San Jose, CA, hopes to sell printed nanocrystal solar cells with 10 percent efficiencies for $1 per watt ("Thin-Film Solar with High Efficiency").

Harvesting Energy from Strange Sources

Cheap, yet efficient, solar cells could help wean us from dirty electricity sources. But that's not what Zhong Lin Wang at Georgia Tech has in mind. Technology Review recognized his work on nanopiezotronics, nanowires and other structures that convert mechanical stress into electrical currents, in our special section on the 10 emerging technologies of the year. Zinc-oxide nanowires that produce an electrical current when stressed could provide a power source for implantable diagnostics and stress sensors embedded in buildings and bridges. These materials could also be woven into an iPod-charging jacket that harvests the small amount of energy produced by the rustling of fabric as you walk. A series of papers produced by Wang throughout the year further established the concept, showing that nanopiezotronics could harness the energy produced by a running hamster ("Harnessing Hamster Power with a Nanogenerator"), that they could act as stress sensors ("Nanosensing Transistors Controlled by Stress"), and that they could be combined with solar cells in a hybrid nanogenerator ("A Hybrid Nano-Energy Harvester").

Optical Materials Advance

Part of the 2009 Nobel Prize in physics went to a researcher whose work formed the basis of modern telecommunications. Charles Kao figured out why optical fibers that had been made in the lab in the 1960s weren't working: the material contained impurities that attenuated the signal. Based on this finding, Kao determined that pure glass could realize the potential of optical data transmission to speed the flow of information ("Nobel for Revolutionary Optical Technologies"). Modern optical fibers are even better than what Kao predicted, losing just 5 percent of the light over a distance of a kilometer; today there are over one billion kilometers of optical fiber around the world, with more being added each day.
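That 5 percent figure can be restated in the units fiber engineers actually quote, decibels of attenuation per kilometer, and compounded over distance:

```python
import math

# Restating "loses just 5 percent of the light per kilometer" in the
# attenuation units fiber engineers use: decibels per kilometer.
transmitted_fraction = 0.95                      # light remaining after 1 km
loss_db_per_km = -10 * math.log10(transmitted_fraction)
print(f"{loss_db_per_km:.2f} dB/km")             # about 0.22 dB/km

# Loss compounds exponentially with distance:
surviving_100km = transmitted_fraction ** 100
print(f"{surviving_100km:.2%} of the light survives 100 km")
```

For comparison, the threshold Kao originally set for practical optical communication in 1966 was about 20 dB per kilometer, roughly a hundred times lossier than today's fiber.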

And this year Intel announced its intention to replace the copper wires that carry data between your MP3 player, laptop, and other devices with optical fiber ("Intel's Plan to Replace Copper Wires"). In 2010, the company will ship Light Peak data cables that zip 10 gigabits of data per second between gadgets using light rather than electrical signals.
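
For a sense of scale, here is the back-of-envelope arithmetic for the quoted 10-gigabit-per-second rate; the 25 GB file size is an assumed example (roughly a Blu-ray disc), and protocol overhead is ignored:

```python
# Time to move a large file at Light Peak's quoted 10 Gb/s line rate.
file_bytes = 25 * 10**9          # assumed 25 GB file, for illustration
link_bits_per_s = 10 * 10**9     # 10 gigabits per second

seconds = file_bytes * 8 / link_bits_per_s
print(f"{seconds:.0f} s")        # 20 s
```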

A fundamental advance in optics this year was the fabrication of an extremely small laser that may eventually be developed into a compact light source for optical computers ("The Smallest Laser Ever Made"). Optical devices can operate at hundreds of terahertz, compared to the 10 gigahertz speeds of the best consumer electronics. But optical devices are difficult to miniaturize. The "nano laser," made by researchers at Cornell, Purdue, and Norfolk State universities, helps overcome this problem.

Ultradense Data Storage and Biodegradable Electronics

The year also brought innovative materials for storing more data for longer periods of time. Layers of gold nanorods that polarize light and reflect different colors can store data in five dimensions ("Five-Dimensional Data Storage"); optical antennas that focus light to tiny, intense spots to heat bits could extend the lifetime of magnetic data storage ("Heating Up Magnetic Memory"); and researchers at IBM looked to magnetic nanowires to make racetrack memory ("TR10: Racetrack Memory").

Meanwhile, researchers in Japan demonstrated the first plastic, flexible flash memory device, which might be incorporated into unconventional electronics such as disposable sensors ("Cheap, Plastic Memory for Flexible Devices"). Researchers in Illinois worked wonders with silicon to make electronics that took strange forms, including flexible LED arrays ("Cheaper LEDs") and biocompatible electronics ("Implantable Silicon-Silk Electronics") that could drive medical implants. These projects were led by John Rogers, who invented a stretchable silicon technology we recognized in 2006 ("Stretchable Silicon"). And at Stanford, researchers made the first fully biodegradable transistors, which might control drug delivery in future medical implants ("Biodegradable Transistors").

By Katherine Bourzac

Synthetics Stop the Bleeding

Nanoparticles designed to mimic the clotting capability of blood platelets have been shown to quickly reduce bleeding in rodents with severed arteries. The synthetic particles, which stick to the body's own platelets, stanch bleeding more effectively than a clotting drug currently used to stem uncontrolled blood loss. "We're helping to form the clot," says Erin Lavik, a bioengineer at Case Western Reserve University in Cleveland, who led the research.

If further tests are successful, researchers hope the nanoparticles could one day be injected by paramedics soon after a traumatic injury, or on the battlefield. Early safety tests are promising, but developing safe blood-clotting treatments has been a challenge. "There's a balance between the two edges of the sword--bleeding too much and clotting too much," says Mortimer Poncz, a physician at the University of Pennsylvania Medical School, in Philadelphia, who was not involved in the research. "You don't want to stop bleeding in the leg but die of a heart attack or have a stroke."

Uncontrolled bleeding is a major cause of trauma-related death. Existing methods of stemming blood loss are largely limited to treating open wounds or to use in the operating room. None has proven effective at stanching internal bleeding before a patient reaches the hospital.

After a traumatic injury, the body launches its own clotting cascade by activating platelets. These disc-shaped blood cells transform into spiky, sticky cells that adhere to each other and to molecules at the injury site, forming a blood clot. Physicians can already enhance the clotting process with drugs or materials that incorporate molecules in the clotting cascade. One such drug is NovoSeven, a synthetic protein derived from a human gene. But this drug is enormously expensive, costing $10,000 to $30,000, and some trauma surgeons question its effectiveness.

Attempts to mimic platelets themselves have so far been unsuccessful. Scientists have engineered red blood cells and blood-specific proteins to bind to platelets, "but those particles can build up in capillary beds, increasing the potential for [dangerous blood clots]," says Lavik.

Lavik and collaborator James Bertram, a graduate student at Yale, have now developed a nanoparticle small enough to flow through capillaries unfettered. It also has a platelet's specific stickiness. The particle is about a third of the size of a normal platelet. Each particle has a polymer core coated with polyethylene glycol (PEG)--a water-soluble molecule that keeps the particles from sticking to each other or to the blood vessels. The PEG molecules are also topped with a peptide sequence that binds to activated platelets. "People had previously shown that activated platelets bind to [this sequence], so we optimized the chemistry to expose the molecule and present it to activated platelets," says Lavik, who was recognized by Technology Review as a TR35 Young Innovator in 2003.

When injected into the bloodstream of rats with a nick in the femoral artery--the large artery in the thigh muscles--the nanoparticles bound activated platelets at the injury site. The treatment halved bleeding time in the rats, from about four minutes to two, proving more effective than NovoSeven. The research was published this week in the journal Science Translational Medicine.

Lavik says the two treatments might prove to be complementary. NovoSeven, she says, "works to help build the fibrin mesh network that's critical in building clot. Perhaps the synthetic platelets could help start building the clot and the drug might help stabilize it."

"It sounds like it has the potential to be useful for controlling bleeding on the battlefield," says John Weisel, a biologist at Penn Medical School, who was not involved in the research.

Early studies suggest the nanoparticles are safe, a major issue for treatments that enhance blood clotting. By studying fluorescently labeled versions of the nanoparticles, researchers found that the particles are easily cleared from the body. And the particles do not accumulate in noninjured tissue, such as the lungs or kidneys, to form dangerous clots. At very high doses--a concentration almost too thick to move through the syringe, says Lavik--the particles did trigger breathing issues in some animals. But such a high dose isn't necessary to generate blood-clotting benefits, she says.

Nevertheless, extensive testing is needed before the particles can be used in humans. "The early research looks very promising, but the human system is different than a rat's," says Rutledge Ellis-Behnke, a researcher at MIT. "Care has to be taken that these do not coat the inside of the lungs and reduce the amount of oxygen transfer into red blood cells."

Researchers plan to test the particles in larger animals, which more closely approximate the human circulatory system, as well as in different types of injuries, such as those that mimic the blast injuries that are particularly common among troops in Iraq and Afghanistan.

By Emily Singer

Hot Electrons Could Double Solar Power

For decades researchers have investigated a theoretical means to double the power output of solar cells--by making use of so-called "hot electrons." Now researchers at Boston College have provided new experimental evidence that the theory will work. They built solar cells that get a power boost from high-energy photons. This boost, the researchers say, is the result of extracting hot electrons. The results are a step toward solar cells that break conventional efficiency limits. Because of the way ordinary solar cells work, they can, in theory, convert at most about 35 percent of the energy in sunlight into electricity, wasting the rest as heat. Making use of hot electrons could result in efficiencies as high as 67 percent, says Matthew Beard, a senior scientist at the National Renewable Energy Laboratory in Golden, CO, who was not involved in the current work. Doubling the efficiency of solar cells could cut the cost of solar power in half.

Hot solar: This solar cell is made of thin layers of amorphous silicon with aluminum dots serving as back electrical contacts. It provides evidence that it may be possible to double the output of solar cells.


Conventional solar cells can only efficiently convert the energy of certain wavelengths of light into electricity. For example, when a solar cell optimized for red wavelengths of light absorbs photons of red light, it produces electrons with energy levels similar to those of the incoming photons. When the cell absorbs a higher-energy blue photon, it first produces a similarly high-energy electron--a hot electron. But this loses much of its energy very quickly as heat before it can escape the cell to produce electricity. (Conversely, cells optimized for blue light don't convert red light into electricity, so they sacrifice the energy in this part of the spectrum.)
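
The energy gap between a red and a blue photon, and hence how much a conventional cell wastes as heat, follows directly from E = hc/λ. A quick sketch, with 650 nm and 450 nm as assumed representative red and blue wavelengths:

```python
# Illustrative arithmetic: how much of a blue photon's energy a cell tuned
# for red light loses as heat when the hot electron cools. Wavelengths are
# assumed example values, not figures from the Boston College work.
h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) / eV

red = photon_energy_ev(650)    # ~1.91 eV: roughly what the cell can use
blue = photon_energy_ev(450)   # ~2.76 eV: the initial hot electron's energy
wasted = blue - red            # excess lost as heat in a conventional cell

print(f"{wasted:.2f} eV lost, {wasted / blue:.0%} of the blue photon's energy")
```

Under these assumptions, nearly a third of each blue photon's energy is thrown away as heat, which is the margin hot-electron extraction aims to recover.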

The Boston College researchers made ultra-thin solar cells just 15 nanometers thick. Because the cells were so thin, the hot electrons could be pulled out of the cell quickly, before they cooled. The researchers found that the voltage output of the cells increased when they illuminated them with blue light rather than red. "Now we're getting the electrons from the blue light out before they lose all of their excess energy," says Michael Naughton, a professor of physics at Boston College.

The problem is that because they're so thin, the solar cells let most of the incoming light pass through them. As a result, they convert only 3 percent of the energy in incoming light into electricity. "I think it's promising," Beard says. But he adds that so far they're only showing "a pretty small effect."

Naughton says that his team plans to address this problem using nanowires. The basic idea, now being pursued by many research groups, is to grow forests of nanowires that absorb light along their lengths. Because each nanowire is thin, electrons won't have far to travel to escape to a conductive layer on its surface. This could make it possible to replicate the hot-electron effect seen in the thin solar cells. Naughton and colleagues are commercializing such nanowires via a startup called Solasta, based in Newton, MA, which is being funded by the venture capital firm Kleiner Perkins Caufield & Byers.

The researchers also hope to increase the number of hot electrons they collect from the absorbed light. To do this, they are turning to an approach taken by Martin Green, a professor at the University of New South Wales in Australia and a leader in using hot electrons in solar cells. This method involves incorporating a layer of quantum dots, which act as a sort-of filter, selectively extracting higher-than-normal-voltage electrons, Beard says. Naughton says that Solasta has already demonstrated that it's possible to incorporate such quantum dots into the company's nanowires.

By Kevin Bullis

Rethinking Voice as an App

A new Internet protocol (IP) voice network was launched this week by Bandwidth.com of Cary, NC. The company hopes to attract businesses that are interested in advanced features, such as the ability to make one phone number ring several phones, but don't want to pay for all the old phone infrastructure. It already has one very big customer: Google Voice.



Bandwidth.com's chief technology officer, T.R. Missner, says the fundamental advantage of his company's network is that it's not bound by any legacy technologies, such as the switches used by the traditional phone system. Missner says that switches can create idiosyncrasies in a network, leading to better or worse service depending on geography. But an all-IP voice network, such as Bandwidth.com's FlexNetwork, should offer consistent service everywhere, he says.

Though Bandwidth.com says it built and owns this network, that doesn't mean the company laid any cable. Instead, building it meant weaving together a system of Internet circuits, IP routing technologies, and connections to other telephone providers. "The hardest part and the longest pull is the interconnections with all of the various [carriers]," Missner says.

The FlexNetwork also offers interfaces that customers can use to write voice applications that take advantage of the network's features. The company can provide its customers with local phone numbers in any part of the U.S., for example. Bandwidth.com makes money by charging customers for those phone numbers or for the minutes served.
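
The article doesn't document those interfaces, so the sketch below is purely hypothetical: every class, method, and field name is invented to illustrate the general shape of a carrier API that provisions local numbers, not Bandwidth.com's actual product.

```python
# Hypothetical sketch only. The FlexNetwork interfaces are not described in
# the article; all names and fields here are invented for illustration.
import json

class VoiceClient:
    """Imagined client for a carrier API that provisions local numbers."""

    def __init__(self, account_id):
        self.account_id = account_id
        self.numbers = []

    def order_local_number(self, area_code):
        # A real client would call the carrier's API here; this sketch
        # just records the request locally.
        order = {"area_code": area_code, "status": "pending"}
        self.numbers.append(order)
        return order

client = VoiceClient(account_id="demo-account")
order = client.order_local_number("919")   # a Cary, NC area code
print(json.dumps(order))
```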

Missner declined to talk specifically about Google Voice, but FCC filings from earlier this year reveal that Bandwidth.com provides the backbone for the service, which offers users one number that can be used to reach all their phones, and provides a number of additional features such as voice-mail transcription and the ability to receive text messages as e-mails.

Ifbyphone, a phone automation services company based in Skokie, IL, also uses Bandwidth.com's network. Ifbyphone CEO Irv Shapiro says that the introduction of all-IP networks allows businesses to "treat the telephone like we treat a Web browser." He explains that, just as companies can build applications that automate services for their customers through a Web browser, similar applications can now be delivered over the phone.

Users have encountered voice applications before, of course, such as the interactive voice response systems used by airlines to provide basic flight information. The difference, Shapiro says, is that in the past, a company would have had to buy a specialized phone switch and hundreds of thousands of dollars in related equipment to offer such an application. VoIP allows similar applications to be delivered over the network, much as cloud computing services are delivered to companies over the Internet.

For VoIP networks, Shapiro notes, "it used to be that the assumption was that the service didn't have to be as reliable." Ifbyphone was attracted to the FlexNetwork because of Bandwidth.com's promises of a robust network, he says.

The company's claims of reliable nationwide coverage do suggest something "a little unique" for VoIP networks, says Rob Enderle, founder of the research company Enderle Group. "Most VoIP companies provide a centralized service that you can access anyplace, but they're not in the position to ensure quality of service because the network isn't theirs," he says. "If [Bandwidth.com] can actually do that, that would be an advantage for them."

Enderle adds that IP voice networks enable a variety of features that are otherwise prohibitively complex. They lend themselves, for example, to combining voice mail and e-mail, creating a universal in-box for several phone numbers, or adding lots of users to a conference call. (Hardware limitations can make this difficult on traditional systems.) "In terms of convergence," he says, "it makes any of those next-generation things vastly easier to do because you're not trying to juggle an analog network and a data network that for the most part don't work well together."

Missner says the company worked to bridge between different types of networks and technologies to create hybrid features such as the ability to send text messages from phones that are considered landlines, or send text messages via instant messaging programs.

"When you think of voice as an application, there's a lot of innovation that becomes possible," he says. "Having a pure IP voice network is a catalyst."

By Erica Naone

Sun-Assisted Desalination

A Canadian startup has built a pilot desalination plant in Vancouver that uses a quarter of the energy of conventional plants to remove salt from seawater. The process relies on concentration gradients, and the natural tendency of sodium and chloride ions--the key components of salt--to flow from higher to lower salinity concentrations. If the system can be scaled up, it could offer a cheaper way to bring drinking water to the planet's most parched regions while leaving a much lower carbon footprint than other desalination methods.

Spray pond: Natural sunlight is used to evaporate seawater, concentrating its salt.


"We've taken it from a benchtop prototype to a fully functional seawater pilot plant," says inventor Ben Sparrow, a mechanical engineer who established Saltworks Technologies in 2008 to commercialize the process. "The plant is currently running on real seawater, and we're in the final stage of expanding it to a capacity of 1,000 liters a day."

Today most desalination plants are based on one of two approaches. One is distillation through an evaporation-condensation cycle, and the other is membrane filtration through reverse osmosis. But both options are energy-intensive and costly.

Saltworks takes a completely different approach based on the principles of ionic exchange. The process begins with the creation of a reservoir of seawater that is evaporated until its salt concentration rises from 3.5 percent to 18 percent or higher.
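
A quick mass-balance check on those figures shows how much water the evaporation step must drive off. Because the salt itself is conserved, the remaining brine mass is simply the ratio of the starting and target concentrations:

```python
# Mass balance on the article's figures: concentrating seawater from
# 3.5% salt to 18% salt by evaporation alone.
start_salinity = 0.035
target_salinity = 0.18

remaining_fraction = start_salinity / target_salinity   # brine left, ~19%
evaporated_fraction = 1 - remaining_fraction            # water driven off

print(f"{evaporated_fraction:.0%} of the seawater must evaporate")  # ~81%
```

Driving off roughly four-fifths of the water is why the process leans on free sunlight or industrial waste heat for this step rather than purchased energy.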

The evaporation is done in one of two ways: either the seawater is sprayed into a shallow pond exposed to sunlight and dry ambient air, or it is held in a large tower exposed to waste heat from a neighboring industrial facility. The second approach is used in the commercial-scale plant. The concentrated water is then pumped at low pressure into the company's desalting unit along with three separate streams of regular seawater. At this point the most energy-intensive part of the process is already over.

Inside the desalting unit, which in the pilot plant is about the size of a microwave oven, specially treated polystyrene bridges connect two of the regular seawater streams to the highly concentrated stream. Driven by diffusion, positive ions (largely sodium) and negative ions (largely chloride) flow through the polystyrene--which has been chemically treated to pass only specific ions--from the concentrated stream into the weaker ones. One bridge is treated to pass only positively charged ions; the other passes only negatively charged ions. Each also passes the other like-charged ions in seawater, such as magnesium and calcium on the positive side and sulfate and bromide on the negative side. "The negatives all flow in one direction and the positives all flow in another direction," Sparrow says.

Hold the salt: The spray pond and associated equipment operating in a prototype system in British Columbia.


The two regular streams--one now having a surplus of positive ions and the other having a surplus of negative ions--are also connected to the third saltwater stream, which is the target for final purification. The two out-of-balance streams want to become balanced again, so they essentially strip the third stream of all positive and negative ions. The end result is de-ionized water that only requires some basic chlorination or ultraviolet treatment before being piped into homes and businesses.

Sparrow, who is also chief executive of Saltworks, says the process uses low-pressure pumps to circulate the water, meaning lightweight plastic pipes can be used instead of corrosion-resistant steel. Saltworks cofounder and president Joshua Zoshi says scaling up the system should be simple because the plastics and ion-selective chemicals used are plentiful and cheap. "Our next step is to engage with industry and work with potential customers to get the technology out into the field," Zoshi says.

Much of the research and pilot-plant funding to date has come from Canada's National Research Council, B.C. Hydro's Powertech Labs, and Sustainable Development Technology Canada, a federal agency that supports clean technology development through grants.

Rick Whittaker, chief technology officer at SDTC, says the company has a reasonable chance of success because the science behind it is sound and the approach is based largely on the creative integration of existing technologies. "There's technical risk," says Whittaker. "But we're quite confident they can scale it up."

By Tyler Hamilton

Complex Integrated Circuits Made of Carbon Nanotubes

The first three-dimensional carbon nanotube circuits, made by researchers at Stanford University, could be an important step toward nanotube computers that are faster and use less power than today's silicon chips. Such a computer is still at least 10 years off, but the Stanford work shows it is possible to make stacked circuits using carbon nanotubes. Stacked circuits cram more processing power into a given area, and also do a better job of dissipating waste heat.

A recent IBM study showed that for a given total power consumption, a circuit made from carbon nanotubes is five times faster than a silicon circuit. "We can make silicon transistors smaller and smaller, but at extremely small dimensions they don't show the desired performance anymore," says Zhihong Chen, manager of carbon technology at the IBM Watson Research Center. "We are looking to alternative materials that can be scaled more aggressively but still maintain device performance."

Carbon circuit: This carbon-nanotube circuit, which is a memory element, is one of the many possible designs that can be made with new methods.


Researchers have had great success in making single nanotube transistors in the lab, but scaling them up to make complex circuits has been difficult because it's impossible to control the quality of every single nanotube. The Stanford circuit designs, which were presented last week at the International Electron Devices Meeting in Baltimore, make it possible to create more complex nanotube circuits in spite of the material's limitations.

"When we deal with a large number of nanoscale components, we can't demand everything to be perfect," says H.-S. Philip Wong, professor of electrical engineering at Stanford. When the Stanford researchers grow arrays of nanotubes to make circuits, they get a mix of semiconducting nanotubes and metallic nanotubes that will cause electrical shorts if they're not eliminated. Some of the nanotubes grow in straight lines, but some are squiggly, and these must also be worked around. While chemists work on methods for growing straight, pure nanotubes, the Stanford researchers' question, Wong says, is, "How do we mitigate that and make sure the system still works?"

The answer is to account for materials limitations in the circuit designs. "We have to find a way to build with the metallic nanotubes so that they don't make trouble," says Subhasish Mitra, professor of electrical engineering and computer science at Stanford. The Stanford group first makes what Mitra calls a "dumb" layout. Using a stamp, researchers transfer a flat-lying, aligned array of carbon nanotubes grown on a quartz substrate to a silicon wafer. They then top the nanotubes with metal electrodes. At the surface of the wafer, between the silicon and the nanotubes, is an insulating layer that acts as a back gate, allowing the researchers to switch the semiconducting nanotubes off before using the metal electrodes to burn out the metallic nanotubes with a blast of electricity. A top gate is added that's patterned in such a way that it won't connect with any misaligned tubes. The circuits are then etched to remove metal electrodes that aren't needed for the final circuit design.
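
The back-gated burn-out step can be captured in a toy model. The numbers below are assumptions for illustration: random-chirality nanotube growth typically yields roughly one metallic tube for every two semiconducting ones, and the sketch simply removes the metallic tubes, since the gated-off semiconducting tubes carry no current during the pulse.

```python
import random

# Toy model of the burn-out step. Assumed: ~1/3 of as-grown tubes are
# metallic; the back gate switches semiconducting tubes off, so a current
# pulse destroys only the metallic ones.
random.seed(0)
tubes = [random.choice(["semi", "semi", "metal"]) for _ in range(300)]

def burn_out(tubes):
    surviving = []
    for t in tubes:
        if t == "metal":
            continue          # conducts during the pulse and burns out
        surviving.append(t)   # gated off: carries no current, so it survives
    return surviving

clean = burn_out(tubes)
print(len(clean), "semiconducting tubes remain of", len(tubes))
```

The point of the design, as described above, is that this culling happens at the circuit level rather than one tube at a time.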

To make a three-dimensional circuit, the researchers simply repeat the stamping and electrode-growth procedures to stack as many layers as are needed before the final etching process. The nanotube stamping process, which the Stanford group first demonstrated last year, is key to creating stacked layers because it can be done at low temperatures that don't melt the metal electrical contacts in underlying layers.

While materials scientists are still working on how to grow batches of carbon nanotubes where every single one is semiconducting, the Stanford group is working around the problem. "Instead of burning out one tube at a time, they do it at the circuit level, then design the circuits smartly to get around the burned-out tubes," says IBM's Chen.

"They've demonstrated small, simple circuits, like what was done in the mid-1960s with silicon," says Shekhar Borkar, an Intel fellow and director of the company's microprocessor technology lab. The Stanford group has made, for example, a simple calculator that can add and store numbers.

The Stanford group is currently working to make ever more complex integrated circuits. "So far as complexity is concerned, there is fundamentally no barrier" with carbon nanotubes, says Mitra. Materials barriers remain, however. The Stanford nanotube arrays are some of the densest ever made, with five to 10 nanotubes per micrometer, but this isn't enough. "We need 100 nanotubes per micrometer to get really good performance," says Wong.

By Katherine Bourzac