


Thanks for the Transparent Memories: Progress in Quest for Reliable, Flexible Computer Memory for Transparent Electronics

Great discoveries sometimes require chiseling away at a mystery, and that holds even when the work is on the nanoscale, where the features involved make a strand of hair look like a redwood by comparison.

More than four years ago, a collection of students, postdoctoral researchers and professors at Rice University found themselves chiseling down into just such a mystery, one involving two of the most basic, common elements on Earth: carbon and silicon oxide.

Using graphene as crossbar terminals, Rice researchers are following through on groundbreaking research that shows silicon oxide, one of the most common materials on Earth, can be used as a reliable computer memory. The memories are flexible, transparent and can be built in 3-D configurations.


The group, led by chemist James Tour, made a discovery of note: it was possible to make bits of computer memory from those elements, and to make them much smaller and perhaps better than anything on the market even today.

From that first revelation in 2008 to now, Tour and his team have steadily advanced the science of two-terminal memory devices, which he fully expects will become ubiquitous in the not-too-distant future.

The latest dispatch is a paper in the journal Nature Communications that describes transparent, non-volatile, heat- and radiation-resistant memory chips created in Tour's lab from those same basic elements, silicon and carbon. But a lot has happened since 2008, and these devices bear only a passing resemblance to the original memory unit.

In the new work, Tour and his co-authors detail their success at making memory chips from silicon oxide sandwiched between electrodes of graphene, the single-atom-thick form of carbon.

Even better, they were able to put those test chips onto flexible pieces of plastic, leading to paper-thin, see-through memories they hope can be manufactured with extraordinarily large capacities at a reasonable price. Think about what that can do for heads-up windshields or displays with embedded electronics, or even flexible, transparent cellphones.

"The interest is starting to climb," said Tour, Rice's Rice's T.T. and W.F. Chao Chair in Chemistry as well as a professor of mechanical engineering and materials science and of computer science. "We're working with several companies that are interested either in getting their chips to do this kind of switching or in the possibility of making radiation-hard devices out of this."

In fact, samples of the chips have climbed all the way to the International Space Station (ISS), where memories created and programmed at Rice are being evaluated for their ability to withstand radiation in a harsh environment.

"Now, we've seen a couple of DARPA announcements asking for proposals for devices based on silicon oxide, the very thing we've shown. So there are other people seeing the feasibility of this approach," Tour said.

It wasn't always so, even if silicon oxide "is the most studied material in humankind," he said.
"Labs in the '60s and '70s that saw the switching effect didn't have the tools to understand what they were looking at," he said. "They didn't know how to exploit it; they called it a soft breakdown in silicon. To them, it was something bad."

In the original work at Rice, researchers put strips of graphite, the bulk form of carbon best known as pencil lead, across a silicon oxide substrate and noticed that applying strong voltage would break the carbon; lower voltages would repeatedly heal and re-break the circuit. They recognized a break could be a "0" and a healed circuit a "1." That's a switch, the most basic memory state.

Manufacturers who have been able to fit millions of such switches on small devices in the likes of flash memory now find themselves bumping against the physical limits of their current architectures, which require three wires -- or terminals -- to control and read each bit.

But the Rice unit, requiring only two terminals, made it far less complicated. It meant arrays of two-terminal memory could be stacked in three-dimensional configurations that would vastly increase the amount of information a chip could hold.

And best of all, the mechanism that made it possible turned out not to be in the graphite, but the silicon oxide. In the breakthrough 2010 paper that followed the 2008 discovery, the researchers led by then-graduate student Jun Yao found that a strong jolt of voltage through a piece of silicon oxide stripped oxygen atoms from a channel only 5 nanometers wide, turning it into pure silicon. Lower voltages would break the channel or reconnect it, repeatedly, thousands of times.
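
To make that switching behavior concrete, here is a minimal sketch in Python of a two-terminal cell that behaves the way the article describes: a one-time forming pulse creates the silicon channel, and lower write pulses break or reconnect it. The voltage thresholds are invented for illustration and are not taken from the Rice papers.

```python
# Toy model of a two-terminal silicon oxide memory cell: a strong
# forming pulse creates the ~5 nm silicon channel, and lower write
# pulses repeatedly break (0) or reconnect (1) it. All voltage
# values below are hypothetical, chosen only for illustration.

class SiOxCell:
    FORM_V = 8.0   # one-time forming voltage (hypothetical)
    RESET_V = 5.0  # breaks the channel -> logic 0 (hypothetical)
    SET_V = 3.5    # reconnects the channel -> logic 1 (hypothetical)

    def __init__(self):
        self.formed = False     # has the silicon channel been created?
        self.connected = False

    def pulse(self, volts):
        if not self.formed:
            if volts >= self.FORM_V:
                self.formed = True     # oxygen stripped, channel formed
                self.connected = True
            return
        if volts >= self.RESET_V:
            self.connected = False     # break the channel: store 0
        elif volts >= self.SET_V:
            self.connected = True      # heal the channel: store 1

    def read(self):
        # A tiny sense voltage measures resistance without switching.
        return 1 if self.connected else 0

cell = SiOxCell()
cell.pulse(8.0)            # form the channel
cell.pulse(5.0)            # write 0
assert cell.read() == 0
cell.pulse(3.5)            # write 1
assert cell.read() == 1
```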

"Jun was the first to recognize what he was seeing," Tour said. "Nobody believed him, though (Rice physicist) Doug Natelson said, 'You know, it's not out of the realm of possibility.' The people on the graphitic memory project were not at all excited about him saying this and they argued with Jun tooth and nail for a couple of years."

Yao struggled to convince his lab partners the switching effect wasn't due to the breaking graphite but to the underlying crystalline silicon. "Jun quietly continued his work and stacked up evidence, eventually building a working device with no graphite," Tour said. Still, he recalled, Yao's colleagues suspected that carbon in the system skewed the results. So he demonstrated another device with no possible exposure to carbon at all.

Yao's revelation became the basis for the next-generation memories now being designed in Tour's lab, where silicon oxides sandwiched between graphene layers are being attached to plastic sheets. There's not a speck of metal in the entire unit (with the exception of leads attached to the graphene electrodes). And the eye can see right through it.

"Now we're making these memories with about an 80 percent yield of working devices, which is pretty good for a non-industrial lab," Tour said. "When you get these ideas into industries' hands, they really sharpen it up."

The idea of transparency came later. "Silicon oxide is basically the same material as glass, so it should be transparent," Tour said. Graphene sheets, single-atom-thick carbon honeycombs, are almost completely transparent, too, and tests detailed in the new paper showed their ability to function as crossbar electrodes, a checkerboard array half above and half below the silicon oxide that creates a circuit where the lines intersect.
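
The crossbar geometry is what makes dense, stackable arrays attractive: each bit sits where a top line crosses a bottom line, so an R-by-C array needs only R + C electrodes. The sketch below shows only that addressing logic, with all of the device physics left out; it is an illustration, not the researchers' design.

```python
# Minimal sketch of crossbar addressing: one memory cell sits at each
# intersection of a top (row) graphene line and a bottom (column)
# graphene line, so an R x C array needs only R + C electrode lines.
# Stacking such layers in 3-D multiplies capacity again.

class Crossbar:
    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # Driving one row line and one column line selects exactly one
        # intersection, where the silicon oxide cell switches.
        self.cells[row][col] = bit

    def read(self, row, col):
        return self.cells[row][col]

xbar = Crossbar(4, 4)       # 16 bits from just 8 electrode lines
xbar.write(2, 3, 1)
print(xbar.read(2, 3))      # -> 1
```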

The marriage of silicon and graphene would extend the long-recognized utility of the first and prove once and for all the value of the second, long touted as a wonder material looking for a reason to be.
"It was a very rewarding experience," said Yao, now a postdoctoral researcher at Harvard, of his work at Rice. "I feel grateful that I stumbled on this, had the support of my advisers and persisted."

By good fortune, Yao was the rare graduate student with three advisers. As confusing as that may have seemed at the start of his Rice career, he was lucky in who those advisers were: digital systems expert Lin Zhong and condensed matter physicist Natelson, both rising stars in their fields, and Tour, a renowned chemist.

Each made important contributions to the project as it progressed. "Doug had very acute intuition about the underlying mechanism, and we constantly turned to Lin for his advice on the electronic architecture," Yao said.

Getting his story on Page 1 of the New York Times was enough of a thrill, but another was ahead as NASA decided to include samples of his chip in an experimental package bound for the space station. The day of Yao's planned departure for his postdoctoral job in Cambridge, Aug. 24, 2011, was to be the best of all as the HIMassSEE project lifted off from Central Asia aboard a cargo flight to the ISS. Minutes later, the unmanned craft crashed in Siberia.

Nearly a year later, a new set of chips made it to the ISS, where they will stay for two years to test their ability to hold a pattern when exposed to radiation in space.

In the meantime, Yao passed responsibility for the project to Jian Lin, a co-author of the new paper who joined the Tour and Natelson labs in 2011 as a postdoctoral researcher. Lin built the latest iterations of silicon oxide memories using crossbar graphene electrodes.

"Our lab members are excellent at synthesizing materials and I'm good at fabrication of devices for various applications, so we work together well," said Lin, whose primary interest is in the application of nanomaterials. "This group is a win-win for me."

Labs at other institutions have picked up the thread, carrying out their own experiments on silicon oxide memory. "The switching mechanism has pretty much been investigated," Lin said. "But from engineering or application perspectives, there are a lot of things that can be done."

So here silicon memory stands, a toddler full of promise. Researchers at Rice and elsewhere are working to increase silicon memory's capacity and improve its reliability while electronics manufacturers think hard about how to make it in bulk and put it into products.

Tour realizes impatience for scientific progress is a function of hurried times and not a failure of the process, but he counsels against frustration. "It's a very interesting system that has been slow to develop," he said, "as we've been working to understand the fundamental switching mechanism," a task largely accomplished by Yao and his Rice advisers in a paper published earlier this year. "This is now transitioning slowly into an applied system that could well be taken up as a future memory system.

"It is a good example of basic research," he said. "Now, others have to be able to look forward from the science and say, 'You know, there's a path to a product here.'"

Co-authors of the Nature Communications paper are Rice graduate students Yanhua Dai, Gedeng Ruan, Zheng Yan and Lei Li. Zhong is an associate professor of electrical and computer engineering. Natelson is a professor of physics and astronomy and of electrical and computer engineering.

The research was supported by the David and Lucile Packard Foundation, the Texas Instruments Leadership University Fund, the National Science Foundation and the Army Research Office.

From sciencedaily

Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's first two months on the market as people clamored to play video games with their entire bodies in lieu of handheld controllers. But while Kinect is great for full-body gaming, it isn't useful as an interface for personal computing, in part because its algorithms can't quickly and accurately detect hand and finger movements. 

 Finger mouse: 3Gear uses depth-sensing cameras to track finger movements.


Now a San Francisco-based startup called 3Gear has developed a gesture interface that can track fast-moving fingers. Today the company will release an early version of its software to programmers. The setup requires two 3-D cameras positioned above the user to the right and left. 

The hope is that developers will create useful applications that will expand the reach of 3Gear's hand-tracking algorithms. Eventually, says Robert Wang, who cofounded the company, 3Gear's technology could be used by engineers to craft 3-D objects, by gamers who want precision play, by surgeons who need to manipulate 3-D data during operations, and by anyone who wants a computer to do her bidding with a wave of the finger.

One problem with gestural interfaces—as well as touch-screen desktop displays—is that they can be uncomfortable to use. They sometimes lead to an ache dubbed "gorilla arm." As a result, Wang says, 3Gear focused on making its gesture interface practical and comfortable. 

"If I want to work at my desk and use gestures, I can't do that all day," he says. "It's not precise, and it's not ergonomic." 

The key, Wang says, is to use two 3-D cameras above the hands. They are currently rigged on a metal frame, but eventually could be clipped onto a monitor. A view from above means that hands can rest on a desk or stay on a keyboard. (While the 3Gear software development kit is free during its public beta, which lasts until November 30, developers must purchase their own hardware, including cameras and frame.)

"Other projects have replaced touch screens with sensors that sit on the desk and point up toward the screen, still requiring the user to reach forward, away from the keyboard," says Daniel Wigdor, professor of computer science at the University of Toronto and author of Brave NUI World, a book about touch and gesture interfaces. "This solution tries to address that."

3Gear isn't alone in its desire to tackle the finer points of gesture tracking. Earlier this year, Microsoft released an update that enabled people who develop Kinect for Windows software to track head position, eyebrow location, and the shape of a mouth. Additionally, Israeli startup Omek, Belgian startup SoftKinetic, and a startup from San Francisco called Leap Motion—which claims its small, single-camera system will track movements to a hundredth of a millimeter—are all jockeying for a position in the fledgling gesture-interface market. 



"Hand tracking is a hard, long-standing problem," says Patrick Baudisch, professor of computer science at the Hasso-Plattner Institute in Potsdam, Germany. He notes that there's a history of using cumbersome gloves or color markers on fingers to achieve this kind of tracking. An interface without these extras is "highly desirable," Baudisch says.

3Gear's system uses two depth cameras (the same type used with Kinect) that capture 30 frames per second. The positions of a user's hands and fingers are matched to a database of 30,000 potential hand and finger configurations. The process of identifying and matching to the database—a well-known approach in the gesture-recognition field—occurs within 33 milliseconds, Wang says, so it feels like the computer can see and respond to even a millimeter of finger movement almost instantly.
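
As a rough illustration of that matching step, the sketch below does a brute-force nearest-neighbor lookup against a database of 30,000 stored poses. The feature vectors and distance metric are stand-ins; the article does not describe 3Gear's actual representation or search algorithm.

```python
# Hedged sketch of database matching as described in the article: each
# depth frame is reduced to a feature vector and matched to the nearest
# of ~30,000 stored hand configurations. The random "features" and the
# Euclidean metric are stand-ins, not 3Gear's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
N_POSES, N_FEATURES = 30_000, 64
database = rng.standard_normal((N_POSES, N_FEATURES))   # stored poses

def match_pose(frame_features: np.ndarray) -> int:
    """Return the index of the closest stored hand configuration."""
    d2 = np.sum((database - frame_features) ** 2, axis=1)
    return int(np.argmin(d2))

frame = rng.standard_normal(N_FEATURES)  # features from one depth frame
print(match_pose(frame))
```

Even this naive search touches only a couple of million floating-point values per frame, which helps explain how a match can be found within a single 33-millisecond frame.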

Even with the increasing interest in gesture recognition for hands and fingers, it may take time for non-gamers and non-engineers to widely adopt the technology. 

"In the desktop space and productivity scenario, it's a much more challenging sell," notes Johnny Lee, who previously worked at Microsoft on the Kinect team and now works at Google. "You have to compete with the mouse, keyboard, and touch screen in front of you." Still, Lee says, he is excited to see the sort of applications that will emerge as depth cameras drop in price, algorithms for 3-D sensing continue to improve, and more developers see gestures as a useful way to interact with machines. 

By Kate Greene  
From Technology Review

Colorful creates passively cooled Nvidia graphics card



The GTX 680 is Nvidia's powerful single-GPU graphics card. GeForce is the brand of graphics processing units (GPUs) designed by Nvidia. In March it was announced that the first chip based on the Kepler architecture was hitting the market, aboard a new graphics card called the GeForce GTX 680. 

The passively cooled GeForce GTX 680 model uses 20 heatpipes and two aluminum heatsinks. Colorful claims this is the first zero-noise GTX 680 solution.

Colorful is considered one of Nvidia's most important board partners in Asia. Established in 1995, Colorful conducts research, designs, manufactures, and sells consumer graphics cards. Those familiar with Colorful regard it as a company that frequently comes up with surprises: PC review site HEXUS describes it as “an unorthodox producer of Nvidia cards,” and a Singapore-based technology site says it makes “some of the most outrageous and over-the-top graphics cards you will find.”

Colorful’s passively cooled solution combines those 20 heatpipes with 280 aluminum fins. Reviewers of the announcement noted one concern: Colorful has not yet mentioned clock speeds. Geek.com wonders if the company might have underclocked the GPU to help keep temperatures to a minimum. “If it hasn’t been underclocked, then it may be a card worth keeping an eye out for,” said the report. The TechPowerUp site said that the design guarantees reliable silent operation at reference clock speeds or with mild overclocking.

There has been no price or release date announced; Colorful is said to be still assessing the marketability of the design.

When Colorful first showed off the iGame card at Computex 2012 in Taipei earlier this month, the product was described as the “iGame GeForce GTX 680 Silent” and drew prompt attention as a card that relies completely on passive cooling rather than a fan.

This is not the first time, however, that a manufacturer has achieved a passively cooled graphics card, and more competition is likely to emerge sooner rather than later, under different partnerships. Sapphire announced in early June that it had come up with its new passively cooled Radeon HD 7770 card. Like the Colorful entry, this does not use a fan but instead dissipates heat via a “heatspreader.” Sapphire partners with AMD.

From phys.org

Magnetic Memory Miniaturized to Just 12 Atoms

The smallest magnetic-memory bit ever made—an aggregation of just 12 iron atoms created by researchers at IBM—shows the ultimate limits of future data-storage systems. 

The magnetic memory elements don't work in the same way that today's hard drives work, and, in theory, they can be much smaller without becoming unstable. Data-storage arrays made from these atomic bits would be about 100 times denser than anything that can be built today. But the 12 atoms making up each bit must be painstakingly assembled using an expensive and complex microscope, and the bits can hold data for only a few hours and only at temperatures approaching absolute zero, so the minuscule memory elements won't be found in consumer devices anytime soon.

 Let's get small: This scanning tunneling microscope image shows a group of 12 iron atoms, the smallest magnetic memory bit ever made.

As the semiconductor industry bumps up against the limits of scaling by making memory and computation devices ever smaller, the IBM Almaden research group, led by Andreas Heinrich, is working from the other end, building computing elements atom-by-atom in the lab. 

The necessary technology for large-scale manufacturing at the single-atom scale doesn't exist yet. Today, says Heinrich, the question is, "What is it you would want to build on the scale of atoms for data storage and computation, in the distant future?"

As engineers miniaturize conventional devices, they're finding that quantum physics, which never had to be accounted for in the past, makes devices less stable. As conventional magnetic memory bits are miniaturized, for example, each bit's magnetic field begins to affect its neighbors', weakening each bit's ability to hold on to a 1 or a 0. 

The IBM researchers found that it was possible to sidestep this problem by using groups of atoms that display a different kind of magnetism. The key, says Heinrich, is the magnetic spin of each individual atom. 

In conventional magnets, whether they're found holding up a note on the refrigerator or in a data-storage array, the magnetic spins of the atoms are aligned. It's this alignment that leads to instability when magnetic-memory elements are miniaturized. The IBM researchers made their tiny memory elements by lining up iron atoms whose spins were counter-aligned, an arrangement known as antiferromagnetism.
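
The advantage of counter-aligned spins is easy to see in miniature: the moments cancel, so a bit produces almost no stray field to disturb its neighbors. The toy calculation below is purely illustrative.

```python
# Illustrative only: net magnetic moment of 12 aligned spins versus
# 12 counter-aligned (antiferromagnetic) spins. The cancellation is
# why neighboring antiferromagnetic bits barely disturb each other.
ferro = [+1] * 12            # all spins aligned, as in a fridge magnet
antiferro = [+1, -1] * 6     # alternating spins, as in the IBM bit

print(sum(ferro))       # 12 -> large stray field
print(sum(antiferro))   # 0  -> essentially no stray field
```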

The researchers both constructed and wrote data to the tiny memory elements using a scanning tunneling microscope, a device developed at IBM Zürich in 1981. This microscope has a very thin conducting probe that can be used to image a surface and push individual atoms around. 

Heinrich says his team found it could make antiferromagnetic memory using fewer than 12 atoms, but these were less stable. With 12 atoms, the memory elements obey classical physics, and the read-and-write pulses applied through the microscope probe are similar to those used in today's hard drives. This research is described today in the journal Science.

Any realistic nonvolatile data storage technology has to be able to hold onto the data for 10 years at temperatures well over room temperature, says Victor Zhirnov, a research scientist at the Semiconductor Research Corporation, who was not involved with the work. The IBM bits can hold onto a 1 or a 0 for just a few hours, and only at very low temperatures, but Heinrich says it should be possible to increase their stability for operation at more realistic temperatures by using 150 atoms per bit rather than 12—still a minuscule number compared to existing forms of memory.

However, making a realistic technology was not the aim of the current work, says Heinrich. His aim is to explore whether other kinds of computing elements can be made from a few atoms, perhaps by embracing quantum effects. "We have to have the foresight not to worry about the next step, but to jump to something potentially revolutionary," he says. 

By Katherine Bourzac
From Technology Review

Air Force Researchers are Building Simple Quantum Computers Out of Holograms

In a paper far too daunting for a Monday, researchers at the Air Force Research Lab (AFRL) have described a novel way to build a simple quantum computer. The idea: rather than using a bunch of finicky interferometers in series to measure the inputs and outputs of data encoded in photons, they want to freeze their interferometers in glass using holograms, making their properties more stable. 

 Quantum computing with holograms. Credit: Warner A. Miller, Grigoriy Kreymerman, Christopher Tison, Paul M. Alsing, Jonathan R. McDonald

Quantum computing requires encoding information into a quantum medium, and light is the most obvious choice. Photons don’t have mass and therefore don’t interact much with external forces; things like electrical interference or magnetic fields don’t mess with the quantum state, and photons travel straight through transparent matter (like fiber optic cable or ambient air). But light is also a bit tricky because photons don’t interact with each other well either. Processing information in a photon at the receiving end can be particularly problematic. To make quantum computing work, researchers generally use interferometers, which basically make photons interact in a way that is diagnostic of the state of the photons. That’s a roundabout way of saying that interferometers enable quantum computations by basically being the read and write devices for photons, with the output of one interferometer feeding the input for the next.

But interferometers aren’t easy to work with. They lose their calibration easily, so stringing together a series of interferometers to conduct more complex calculations isn’t easy to do. So the AFRL team had an idea: why not freeze the properties of the interferometers in place by translating them to holograms “frozen” in a piece of tempered glass? That way researchers could stack the holograms to perform simple quantum functions without worrying about them losing their properties. There’s an off-the-shelf commercial product called OptiGrate that is apparently pretty ideal for this kind of holographic freezing.

Of course, there are drawbacks. For one, OptiGrate is one-time write-only, so there’s no reprogramming a quantum setup once the holograms have been frozen in place. They also aren’t scalable, at least for the time being. Simple computations would be all they are capable of.
Even so, there’s a need for reliable quantum computing schemes, even very simple ones, and as yet there’s no real technology that’s stepped into that space, Technology Review tells us. So while this kind of thing is pretty nascent, it could be the beginning of something bigger and better as technologies (like OptiGrate) mature.

From popsci

More Powerful Supercomputers? New Device Could Bring Optical Information Processing

The "passive optical diode" is made from two tiny silicon rings measuring 10 microns in diameter, or about one-tenth the width of a human hair. Unlike other optical diodes, it does not require external assistance to transmit signals and can be readily integrated into computer chips.

 This illustration shows a new "all-silicon passive optical diode," a device small enough to fit millions on a computer chip that could lead to faster, more powerful information processing and supercomputers. The device has been developed by Purdue University researchers.

The diode is capable of "nonreciprocal transmission," meaning it transmits signals in only one direction, making it capable of information processing, said Minghao Qi (pronounced Chee), an associate professor of electrical and computer engineering at Purdue University.

"This one-way transmission is the most fundamental part of a logic circuit, so our diodes open the door to optical information processing," said Qi, working with a team also led by Andrew Weiner, Purdue's Scifres Family Distinguished Professor of Electrical and Computer Engineering.

The diodes are described in a paper to be published online Dec. 22 in the journal Science. The paper was written by graduate students Li Fan, Jian Wang, Leo Varghese, Hao Shen and Ben Niu, research associate Yi Xuan, and Weiner and Qi.

Although fiberoptic cables are instrumental in transmitting large quantities of data across oceans and continents, information processing is slowed and the data are susceptible to cyberattack when optical signals must be translated into electronic signals for use in computers, and vice versa.

"This translation requires expensive equipment," Wang said. "What you'd rather be able to do is plug the fiber directly into computers with no translation needed, and then you get a lot of bandwidth and security."

Electronic diodes constitute critical junctions in transistors and help enable integrated circuits to switch on and off and to process information. The new optical diodes are compatible with industry manufacturing processes for complementary metal-oxide-semiconductors, or CMOS, used to produce computer chips, Fan said.

"These diodes are very compact, and they have other attributes that make them attractive as a potential component for future photonic information processing chips," she said.

The new optical diodes could make for faster and more secure information processing by eliminating the need for this translation. The devices, which are nearly ready for commercialization, also could lead to faster, more powerful supercomputers by using them to connect numerous processors together.

"The major factor limiting supercomputers today is the speed and bandwidth of communication between the individual superchips in the system," Varghese said. "Our optical diode may be a component in optical interconnect systems that could eliminate such a bottleneck."

Infrared light from a laser at telecommunication wavelength goes through an optical fiber and is guided by a microstructure called a waveguide. It then passes sequentially through two silicon rings and undergoes "nonlinear interaction" while inside the tiny rings. Depending on which ring the light enters first, it will either pass in the forward direction or be dissipated in the backward direction, making for one-way transmission. The rings can be tuned by heating them using a "microheater," which changes the wavelengths at which they transmit, making it possible to handle a broad frequency range.
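
The article attributes the one-way behavior to nonlinear interaction in the rings, and the essence of that can be shown with two stand-in transmission functions: because each element's output depends on its input power, traversing them in opposite orders gives different results. The functions and numbers below are invented for illustration and are not the Purdue device's actual physics.

```python
# Toy illustration of why passing through two *nonlinear* elements in
# opposite orders gives different results: power-dependent transmission
# does not commute. Both functions are hypothetical stand-ins.

def ring_a(power):
    # Transmits well only above a (hypothetical) nonlinear threshold.
    return power * (0.9 if power > 0.5 else 0.1)

def ring_b(power):
    # Roughly linear attenuation.
    return power * 0.4

forward = ring_b(ring_a(1.0))    # light enters ring A first
backward = ring_a(ring_b(1.0))   # light enters ring B first

print(f"forward:  {forward:.3f}")    # 0.360 (signal passes)
print(f"backward: {backward:.3f}")   # 0.040 (signal dissipated)
```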

From sciencedaily

IBM Makes Revolutionary Racetrack Memory Using Existing Tools

IBM has shown that a revolutionary new type of computer memory—one that combines the large capacity of traditional hard disks with the speed and robustness of flash memory—can be made with standard chip-making tools. 

 Memory milestone: These nanowires are part of a prototype chip for a novel form of data storage that could fit more information into a smaller space than today’s technology.


The work is important because the cost and complexity of manufacturing fundamentally new computer components can often derail their development.

IBM researchers first described their vision for "racetrack" computer memory in 2008. Today, at the International Electron Devices Meeting in Washington, D.C., they unveiled the first prototype that combines on one chip all the components racetrack memory needs to read, store, and write data. The chip was fabricated using standard semiconductor manufacturing tools.

Racetrack memory stores data on nanoscale metal wires. Bits of information—digital 1s and 0s—are represented by magnetic stripes in those nanowires, which are created by controlling the magnetic orientation of different parts of the wire. 

Writing data involves inserting a new magnetic stripe into a nanowire by applying current to it; reading data involves moving the stripes along the nanowire past a device able to detect the boundaries between stripes.
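
Stripped of the magnetics, that data flow behaves like a shift register: writes insert stripes at one end of the wire, and current pulses march them past a fixed read head. Here is a minimal sketch, with the domain-wall physics replaced by a deque; the positions and head location are arbitrary.

```python
# Data-flow sketch of a racetrack nanowire: bits are magnetic stripes
# that current pulses shift along the wire past a fixed read head.
# (Real devices move domain walls with spin-polarized current; the
# deque below only mimics the shifting logic.)
from collections import deque

track = deque([0] * 8)        # 8 stripe positions on one nanowire
for bit in (1, 0, 1, 1):
    track.rotate(1)           # current pulse: shift existing stripes
    track[0] = bit            # write a new stripe at the wire's end

head = 4                      # fixed read head partway along the wire
readout = []
for _ in range(4):
    track.rotate(1)           # shift the next stripe under the head
    readout.append(track[head])
print(readout)                # -> [1, 0, 1, 1], the order written
```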

Earlier demonstrations of the technology employed nanowires on a silicon wafer in a specialized research machine, with other components of the memory attached separately. "All the circuits were separate from the chip with the nanowires on," says Stuart Parkin, who first conceived of racetrack memory and leads IBM's research on the technology at its research lab in Almaden, California. "Now we've been able to make the first integrated version with everything on one piece of silicon."

The new racetrack prototype was made at IBM's labs in Yorktown, New York, using a manufacturing technique known as CMOS, which is widely used to make processors and various semiconductor components. This proves that it should be feasible to make racetrack memory commercially, says Parkin, although much refinement is still needed. 

The nickel-iron nanowires at the heart of the prototype were made by depositing a complete layer of metal onto an area of the wafer, and then etching away material to leave the nanowires behind. 
The wires are approximately 10 micrometers long, 150 nanometers wide, and 20 nanometers thick. One end of each nanowire is connected to circuits that deliver pulses of electrons with carefully controlled quantum-mechanical "spin" to write data into the nanowire as magnetic stripes. The other end of each nanowire has additional layers patterned on top that can read out data by detecting the boundaries between stripes when they move past.

Dafiné Ravelosona, an experimental physicist at the Institute of Fundamental Electronics in Orsay, France, leads a European collaboration working on its own version of racetrack memory. He says IBM's latest results are a crucial step along the road to commercialization for the technology. "It's a nice demonstration that shows it's possible to make this kind of memory using CMOS," he says. 

However, Ravelosona adds that the IBM work doesn't yet demonstrate all of the key components that make racetrack memory desirable. "They have only demonstrated that it is possible to move a single bit in each nanowire," he explains. 

Much of the promise of the technology lies in the potential to store many bits—using many magnetic stripes—in a single tiny nanowire, to achieve very dense data storage. Ravelosona suggests that the material used to make the nanowires in the new IBM device lacks the right magnetic properties to allow that.

Parkin says that the intention wasn't to target density but adds, "We're focusing on exactly this question." His group is currently working on how to fit as many magnetic stripes as possible into a nanowire and has begun experiments that suggest that wires made from a different type of material may do better.

The nickel-iron alloy of the integrated prototype is what's known as a soft magnetic material, because it can be easily magnetized and demagnetized by an external magnetic field. Parkin is also experimenting with hard magnetic materials, which get their magnetic properties from their tightly fixed crystalline structure and as a result are not easily demagnetized.

"Using this different material, we have discovered we can move the domain walls [between magnetic stripes] very fast and that they are much smaller and stronger than in the soft magnetic material used in the integrated devices," says Parkin. 

That means not only that it should be easier to put many stripes into one nanowire, but also that nanowires fabricated with less precision will still work, which should make fabrication easier. "I call this racetrack 2.0," he says.

By Tom Simonite
From Technology Review

Heat from Fingertips Could Help ATM Hackers

The secret codes typed in by banking customers can be recorded using the residual heat left behind on the keypad, says a group of researchers from the University of California at San Diego.

The group's paper, presented earlier this month at the USENIX Workshop on Offensive Technologies, shows that a digital infrared camera can read the digits of a customer's PIN on the keypad more than 80 percent of the time if used immediately. And if the camera is used a minute later, says Keaton Mowery, a doctoral student in computer science at UCSD, it can still detect the correct digits about half the time.

 Hot hacker: A typical ATM keypad is shown at top. Below is a thermal image taken immediately after it's been used. The code in this case was 1485.

The research, which Mowery conducted with fellow student Sarah Meiklejohn and professor Stefan Savage, is based on previous work by well-known security researcher Michal Zalewski, who in 2005 used an infrared camera to detect codes punched into a safe with a keypad lock. While Zalewski was able to detect the codes even after five minutes, the UCSD researchers found that the chance of extracting the proper digits dropped to about 20 percent after 90 seconds.

The infrared method can circumvent defensive strategies such as shielding the keypad from view. However, an ATM user could evade this infrared surveillance merely by placing a hand over the entire keypad to warm all of the keys, says Mowery. And if an ATM also uses the keypad for entering other numbers, such as the amount of money to withdraw, that contributes additional noise, says Meiklejohn.

The method has other weaknesses as well. "With plastic keypads, we can reliably detect which buttons were pressed, but it is really difficult to determine the order," Mowery says. Even if the image was recorded immediately after the user typed it in, the order of the digits was only detectable about 20 percent of the time. 
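
A toy cooling model shows why the order fades so quickly. If each key cools exponentially toward room temperature (Newton's law of cooling), earlier presses are cooler than later ones, but the temperature gaps shrink fast. The constants below are illustrative, not measurements from the UCSD study.

```python
# Toy Newton's-law cooling model of keypad order recovery: each pressed
# key starts near skin temperature and decays toward ambient, so keys
# pressed earlier are cooler. All constants are illustrative guesses.
import math

AMBIENT, SKIN, TAU = 24.0, 33.0, 30.0   # deg C, deg C, seconds

def key_temp(seconds_since_press):
    return AMBIENT + (SKIN - AMBIENT) * math.exp(-seconds_since_press / TAU)

# The PIN 1485 (from the image caption above) typed one digit per
# second, with the thermal image taken 5 seconds after the last press.
presses = {"1": 8.0, "4": 7.0, "8": 6.0, "5": 5.0}
temps = {k: key_temp(t) for k, t in presses.items()}

# Recover the order: warmer keys were pressed more recently.
print(sorted(temps, key=temps.get))              # -> ['1', '4', '8', '5']
print({k: round(v, 2) for k, v in temps.items()})  # gaps of ~0.25 deg C
```

With successive keys separated by only a fraction of a degree, even slight sensor noise scrambles the ordering, which is consistent with the roughly 20 percent success rate the researchers report.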

And if the keypad is metal, fuhgeddaboudit. "Essentially, if you pointed the camera directly at the metal keypad, it would show you the thermal fingerprint of you, the camera operator, rather than of the keypad itself," Meiklejohn says. "However, we didn't push it, because the plastic keypad did work. It's possible that someone else could solve those issues."

Combine all of these shortcomings with the cost of the infrared camera—$2,000 a month to rent, about $18,000 to buy—and the likelihood of anyone attacking an ATM this way is low, says researcher Zalewski. "Miniature daylight cameras are a lot simpler and more reliable," he says. 

By Robert Lemos
From Technology Review

Quantum Processor Hooks Up with Quantum Memory

Researchers at the University of California, Santa Barbara, have become the first to combine a quantum processor with memory that can be used to store instructions and data. This achievement in quantum computing replicates a similar milestone in conventional computer design from the 1940s.

 Super cool: When chilled almost to absolute zero, this chip becomes a quantum computer that includes both a processor (the two black squares) and memory (the snaking lines on either side).

Although quantum computing is now mostly a research subject, it holds out the promise of computers far more capable than those we use today. The power of quantum computers comes from their version of the most basic unit of computing, the bit. In a conventional computer, a bit can represent either 1 or 0 at any time. Thanks to the quirks of quantum mechanics, the equivalent in a quantum computer, a qubit, can represent both values at once. When qubits in such a "superposition" state work together, they can operate on exponentially more data than the same number of regular bits. As a result, quantum computers should be able to defeat encryption that is unbreakable in practice today and perform highly complex simulations.
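
That exponential scaling can be stated precisely: the state of n qubits is a vector of 2^n complex amplitudes. The standard textbook illustration below, included here only to make the claim concrete, puts two qubits into an equal superposition and prints the weight each of the four bit patterns carries at once.

```python
# Textbook illustration of the scaling claim above: n qubits carry
# 2**n complex amplitudes, so two qubits in an equal superposition
# have weight on all four bit patterns simultaneously.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # one qubit: (|0> + |1>)/sqrt(2)
state = np.kron(plus, plus)                # two qubits: tensor product

for bits, amp in zip(["00", "01", "10", "11"], state):
    print(f"|{bits}>: amplitude {amp:.3f}, probability {abs(amp)**2:.2f}")
```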

Linking a processor and memory elements brings such applications closer, because it should make it more practical to control and program the computations a quantum computer can perform, says Matteo Mariantoni, who led the project, which is part of a wider program at UCSB headed by John Martinis and Andrew Cleland.

The design the researchers adopted is known as the von Neumann architecture—named after John von Neumann, who pioneered the idea of making computers that combine processor and memory. Before the first von Neumann designs were built in the late 1940s, computers could be reprogrammed only by physically reconfiguring them. "Every single computer we use in our everyday lives is based on the von Neumann architecture, and we have created the quantum mechanical equivalent," says Mariantoni.
The only quantum computing system available to buy—priced at $10 million—lacks memory and works like a pre-von Neumann computer.

Qubits can be made in a variety of ways, such as suspending ions or atoms in magnetic fields. The UCSB group used more conventional electrical circuits, albeit ones that must be cooled almost to absolute zero to make them superconducting and activate their quantum behavior.  They can be fabricated by chip-making techniques used for conventional computers. Mariantoni says that using superconducting circuits allowed the team to place the qubits and memory elements close together on a single chip, which made possible the new von Neumann-inspired design.

The processor consists of two qubits linked by a quantum bus that enables them to communicate. Each is also connected to a memory element into which the qubit can save its current value for later use, serving the function of the RAM (random access memory) of a conventional computer. The links between the qubits and the memory contain devices known as resonators, zigzagging circuits inside which a qubit's value can live on for a short time.

Mariantoni's group has used the new system to run an algorithm that is a kind of computational building block, called a Toffoli gate, which can be used to implement any conventional computer program. The team also used its design to perform a mathematical operation that underlies the algorithm with which a quantum computer might crack complex data encryption.
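
Classically, the Toffoli gate is simple to state: it flips a target bit only when both control bits are 1. Because suitable input choices yield AND and NOT, it is universal for reversible classical logic, which is why it suffices, in principle, to implement any conventional program. A truth-table sketch:

```python
# Classical truth table of the Toffoli gate mentioned above: the target
# bit c flips only when both control bits a and b are 1. Fixing c = 0
# computes AND(a, b); fixing a = b = 1 computes NOT(c).
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print((a, b, c), "->", toffoli(a, b, c))
```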

David Schuster leads a group at the University of Chicago that also works on quantum computing, including superconducting circuits. He says that superconducting circuits have recently proved to be comparatively reliable. "One of the next big frontiers for these techniques now is scale," he says. By replicating the von Neumann architecture, the UCSB team has expanded that frontier.

That's not to say that quantum computers must all adopt that design, though, as conventional computers have. "You could make a computer completely out of qubits and it could do every kind of calculation," says Schuster. However, there are advantages to making use of resonators like those that form the new design's memory: "Resonators are easier and more reliable to make than qubits and easier to control," he says.

Mariantoni agrees. "We can easily scale the number of these unit cells," he says. "I believe that arrays of resonators will represent the future of quantum computing with integrated circuits."

By Tom Simonite
From Technology Review

Stick-On Electronic Tattoos

Researchers have made stretchable, ultrathin electronics that cling to skin like a temporary tattoo and can measure electrical activity from the body. These electronic tattoos could allow doctors to diagnose and monitor conditions like heart arrhythmia or sleep disorders noninvasively.

 Pinch me: These microelectronics are able to wrinkle, bend, and twist along with skin, even as it is being pinched, without breaking or coming loose.

John A. Rogers, a professor of materials science at the University of Illinois at Urbana-Champaign, has developed a prototype that can replicate the monitoring abilities of bulky electrocardiograms and other medical devices that are normally restricted to a clinical or laboratory setting. This work was presented today in Science.

To achieve flexible, stretchable electronics, Rogers employed a principle he had already used to achieve flexibility in substrates. He made the components—all composed of traditional, high-performance materials like silicon—not only incredibly thin, but also "structured into a serpentine shape" that allows them to deform without breaking. The result, says Rogers, is that "the whole system takes on this kind of spiderweb layout."
In the past, says Rogers, he was able to create devices that were either flexible but not stretchable, or stretchable but not flexible. In particular, his previous work was limited by the fact that the electronics portions of his designs couldn't flex and stretch as much as the substrate they were mounted on.

The electronic tattoo achieves the mechanical properties of skin, which can stand up to twisting, poking, and pulling without breaking. Rogers's tattoo can also conform to the topography of the skin as well as stretch and shift with it. It can be worn for extended periods without producing the irritation that often results from adhesive tapes and rigid electronics. Although Rogers's preliminary tests involved a custom-made substrate, he also demonstrated that the electronics could be mounted onto a commercially available temporary tattoo. 

The prototype was equipped with electrodes to measure electric signals produced by muscle and brain activity. This could be useful for noninvasive diagnosis of sleep apnea or monitoring of premature babies' heart activity. It also might be possible, Rogers says, to use the tattoos to stimulate the muscles of physical rehabilitation patients, although this use wasn't demonstrated in the paper.

To demonstrate the device's potential as a human-computer interface, Rogers mounted one of the tattoos on a person's throat and used measurements of the electrical activity in the throat muscles to control a computer game. The signal from the device contained enough information for software to distinguish among the spoken words "left," "right," "up," and "down" to control a cursor on the screen.

The device included sensors for temperature, strain, and electric signals from the body. It also housed LEDs to provide visual feedback; photodetectors to measure light exposure; and tiny radio transmitters and receivers. The device is small enough that it requires only minuscule amounts of power, which it can harvest via tiny solar cells and via a wireless coil that receives energy from a nearby transmitter. Rogers hopes to build in some sort of energy-storage ability, like a tiny battery, in the near future. The researchers are also working on making the device wireless.

Ultimately, Rogers says, "we want to have a much more intimate integration" with the body, beyond simply mounting something very closely to the skin. He hopes that his devices will eventually be able to use chemical information from the skin in addition to electrical information.

By Kenrick Vezina
From Technology Review

A Guiding Light for Silicon Photonics

A new way of controlling the path that light takes as it passes through silicon could help overcome one of the big obstacles to making an optical, rather than electronic, computer circuit. Researchers at Caltech and the University of California, San Diego, have taken a step toward a device that prevents light signals from reflecting back and causing errors in optical circuits.

 Light bouncer: Light entering a metallic silicon waveguide from the left flows freely (top). Light entering from the right has its path disrupted.

Chips that compute with light instead of electrons promise to be not only faster, but also less expensive and more energy-efficient than their conventional counterparts. But to be made economically, many believe, photonic chips must be made from silicon, using equipment already being used to build electronic microchips.
Researchers have made many of the necessary elements for a silicon photonic circuit already, including superfast modulators for encoding information onto beams of light, and detectors to read these beams. 
 
But the way light travels through silicon remains a big problem. Light doesn't just go in one direction—it bounces around and even reflects backward, which is disastrous in a circuit. If an optical device designed to receive two inputs also received a third, reflected signal, that would cause an error. As a circuit became more complex, error-causing reflections would overwhelm it.

The Caltech and UCSD researchers have developed a silicon waveguide that causes light to behave differently depending on the direction it's traveling. The researchers, led by Caltech electrical engineering professor Axel Scherer, created a waveguide out of a long, narrow strip of silicon about 800 nanometers wide, with metal spots along the sides like bumpers. Light travels freely in one direction down the waveguide, but is bent as it travels in the opposite direction.

"This is an important breakthrough in a field where we really need a few," says Marin Soljačić, a physics professor at MIT. Soljačić was not involved with the work. The lack of this kind of component, he says, has been "the single biggest obstacle to the large-scale integration of optics at a similar scale to electronics."

Physicists have been wrestling with the unruly behavior of light in silicon for a long time. The new design is the result of years of theoretical work by the California researchers, as well as Soljačić, Shanhui Fan at Stanford University, and others. Previously, researchers had only been able to get light to behave this way in magnetic materials that cannot be incorporated into silicon circuitry, says Michelle Povinelli, assistant professor of electrical engineering at the University of Southern California. 

Soljačić says the new waveguide is particularly significant because it was fabricated using methods used by the semiconductor industry. "This is a very important step toward large-scale optics integration," he says.
Caltech researcher Liang Feng says the team is now working on engineering a full isolator—a component that only lets light travel in one direction, instead of just bending it as it tries to travel the wrong way. He says the current work "is just the first step." 

"Now it's about engineering around this fundamental discovery," says Keren Bergman, professor of electrical engineering at Columbia University. Bergman was not involved with the work. 

Even after that engineering is finished, Bergman says, there's a big looming problem for silicon photonics: there's no good way to make the light sources that are needed for silicon optical processors. Soljačić adds that a full optical computer will also need optical memory, which hasn't been made, either. However, the current work overcomes the "biggest uncertainty" that had been troubling engineers, he says. "Now, with this work, I'm feeling much better."

By Katherine Bourzac
From Technology Review

A Futures Market for Computer Security

Information security researchers from academia, industry, and the U.S. intelligence community are collaborating to build a pilot "prediction market" capable of anticipating major information security events before they occur.


A prediction market is similar to a regular stock exchange, except the "stocks" are simple statements that the exchange's members are encouraged to evaluate. Traders buy and sell "shares" of a stock based on the strength of their confidence about the future outcome, with an overall goal of increasing the value of their portfolios, which will in turn earn them some sort of financial reward. That buying and selling activity pushes the stock price up or down, just as in a real market.

Some of the stocks being considered cover a few months, such as: "The volume of spam e-mail will increase by 10 percent in the third quarter of 2011." Others will ask participants to gauge the likelihood of far-off events, such as the chance that the U.S. House of Representatives will pass a bill with "cyber" and "security" in its title in the first session of the 112th Congress, or whether broadly used encryption algorithms will be defeated within the next 24 months.
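
The article doesn't say how the pilot market will set prices. One common mechanism for markets like this is the logarithmic market scoring rule (LMSR), invented by Robin Hanson, who is quoted below; the sketch that follows shows how buying shares of an outcome pushes its price, which doubles as the market's probability estimate, upward. The liquidity parameter and trade sizes are arbitrary.

```python
# Sketch of an LMSR automated market maker, a common (assumed, not
# confirmed for this pilot) pricing mechanism for prediction markets.
# The price of an outcome acts as the market's probability estimate.
import math

B = 100.0                      # liquidity parameter (chosen arbitrarily)
q = {"yes": 0.0, "no": 0.0}    # shares sold so far of each outcome

def cost(q):
    return B * math.log(sum(math.exp(s / B) for s in q.values()))

def price(q, outcome):
    tot = sum(math.exp(s / B) for s in q.values())
    return math.exp(q[outcome] / B) / tot

def buy(q, outcome, shares):
    before = cost(q)
    q[outcome] += shares
    return cost(q) - before    # what the trader pays

print(price(q, "yes"))              # 0.50 before any trading
paid = buy(q, "yes", 50)            # a confident trader buys "yes"
print(round(price(q, "yes"), 3))    # ~0.622: the probability rises
print(round(paid, 2))               # ~28.10: cost of the 50 shares
```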

Greg Shannon, chief scientist of the CERT program at Carnegie Mellon's Software Engineering Institute, who is involved with the project, says the purpose is to provide actionable data.

"If you're Verizon, and you're trying to pre-position resources, you might want to have some visibility over the horizon about the projected prevalence of mobile malware," Shannon said. "That's something they'd like to have an informed opinion about by leveraging the wisdom of the security community."

Prediction markets have effectively forecast all manner of events and trends, from the success of sports teams to the sales of new products. The pilot project will rely on software and services provided by Consensus Point, a Nashville-based company that has helped to build employee-driven prediction markets for several major companies, including General Electric, Best Buy, and Qualcomm. Best Buy's prediction market—called "TagTrade"—is designed to give management an early indicator of which new products or ideas are likely to succeed, and whether specific new stores will open on time. The University of Iowa's Iowa Electronic Markets, one of the earliest prediction markets, has significantly outperformed the polls in every presidential election when forecasting more than 100 days in advance: Compared to 964 polls over the five presidential elections since 1988, the Iowa market was closer to the eventual outcome 74 percent of the time. The University of Iowa also uses prediction markets to forecast seasonal flu outbreaks.

Prediction markets have a major built-in bias—those answering the questions are not polled randomly—but respondents also have an incentive to respond only to those questions they feel confident in answering with accuracy.

"Prediction markets aren't just surveys that ask everyone to speak up," Robin Hanson, chief scientist at Consensus Point. " People tend to speak up only when they're reasonably sure they know the answer."

Consensus Point CEO Linda Rebrovick says the goal of the project is to attract a network of about 250 experts, although the organizers are still deciding how to compensate participants for correct answers.

"There will be some combination of rewards and financial incentives for participating," Rebrovick says.

Even if questions generate only tepid responses, such responses can be informative, says Dan Geer, chief information security officer at In-Q-Tel, the venture capital arm of the Central Intelligence Agency (CIA). Geer is also involved in the project. "It may be that this tells us there is ambiguity, or that we are, in effect, measuring disagreement on a question that doesn't have a quantitative aspect to it," Geer says. "Straight-out surveys are vulnerable to idiot answers, and prediction markets are vulnerable to stupid questions."

While the pilot project will be limited to invited information security experts, the consensus decisions reached by the group will be made public. "Even if we can't find something useful in all of this, we feel that's a valuable result. It's the way you make progress," Geer says.

By Brian Krebs
From Technology Review

A Smarter, Stealthier Botnet

A new kind of botnet—a network of malware-infected PCs—behaves less like an army and more like a decentralized terrorist network, experts say. It can survive decapitation strikes, evade conventional defenses, and even wipe out competing criminal networks.



The botnet's resilience is due to a super-sophisticated piece of malicious software known as TDL-4, which in the first three months of 2011 infected more than 4.5 million computers around the world, about a third of them in the United States.

The emergence of TDL-4 shows that the business of installing malicious code on PCs is thriving. Such code is used to conduct spam campaigns and various forms of theft and fraud, such as siphoning off passwords and other sensitive data. It's also been used in the billion-dollar epidemic of fake anti-virus scams.

"Ultimately TDL-4 is simply a tool for maintaining and protecting a compromised platform for fraud," says Eric Howes, malware analyst for GFI Software, a security company. "It's part of the black service economy for malware, which has matured considerably over the past five years and which really needs a lot more light shed on it."

Unlike other botnets, the TDL-4 network doesn't rely on a few central "command-and-control" servers to pass along instructions and updates to all the infected computers. Instead, computers infected with TDL-4 pass along instructions to one another using public peer-to-peer networks. This makes it a "decentralized, server-less botnet," wrote Sergey Golovanov, a malware researcher at the Moscow-based security company Kaspersky Lab, in a blog post describing the new threat.

"The owners of TDL are essentially trying to create an 'indestructible' botnet that is protected against attacks, competitors, and antivirus companies," Golovanov wrote. He added that it "is one of the most technologically sophisticated, and most complex-to-analyze malware."

The TDL-4 botnet also breaks new ground by using an encryption algorithm that hides its communications from traffic-analysis tools. This is an apparent response to efforts by researchers to discover infected machines and disable botnets by monitoring their communication patterns, rather than simply identifying the presence of the malicious code.

Demonstrating that there is no honor among malicious software writers, TDL-4 scans for and deletes 20 of the most common forms of competing malware, so it can keep infected machines all to itself. "It's interesting to mention that the features are generally oriented toward achieving perfect stealth, resilience, and getting rid of 'competitor' malware," says Costin Raiu, another malware researcher at Kaspersky.

Distributed by criminal freelancers called affiliates, who get paid between $20 and $200 for every 1,000 infected machines, TDL-4 lurks on porn sites and some video and file-storage services, among other places, where it can be automatically installed using vulnerabilities in a victim's browser or operating system.

Once TDL-4 infects a computer, it downloads and installs as many as 30 pieces of other malicious software—including spam-sending bots and password-stealing programs. "There are other malware-writing groups out there, but the gang behind [this one] is specifically targeted on delivering high-tech malware for profit," says Raiu.

By David Talbot
From Technology Review

A Practical Way to Make Invisibility Cloaks

A new printing method makes it possible to produce large sheets of metamaterials, a new class of materials designed to interact with light in ways no natural materials can. For several years, researchers working on these materials have promised invisibility cloaks, ultrahigh-resolution "superlenses," and other exotic optical devices straight from the pages of science fiction. But the materials were confined to small lab demonstrations because there was no way to make them in large enough quantities to demonstrate a practical device.

 Light warp: This is the largest sheet ever made of a metamaterial that can bend near-infrared light backwards.

"Everyone has, perhaps conveniently, been in the position of not being able to make enough [metamaterial] to do anything with it," says John Rogers, a professor of materials science and engineering at the University of Illinois at Urbana-Champaign, who developed the new printing method. Metamaterials that interact with visible light have previously not been made in pieces larger than hundreds of micrometers. 

Metamaterials are made up of intricately patterned layers, often of metals. The patterns must be on the same scale as the wavelength of the light they're designed to interact with. In the case of visible and near-infrared light, this means features on the nanoscale. Researchers have been making these materials with such time-consuming methods as electron-beam lithography.

 Light mesh: The large-area metamaterial is made up of a layered mesh of metals patterned on the nanoscale.


Rogers has developed a stamp-based printing method for generating large pieces of one of the most promising types of metamaterial, which can make near-infrared light bend the "wrong" way when it passes through. Materials with this so-called negative index of refraction are particularly promising for making superlenses, night-vision invisibility cloaks, and sophisticated waveguides for telecommunications.

The Illinois group starts by molding a hard plastic stamp that's covered with a raised fishnet pattern. The stamp is then placed in an evaporation chamber and coated with a sacrificial layer, followed by alternating layers of the metamaterial ingredients—silver and magnesium fluoride—to form a layered mesh on the stamp. The stamp is then placed on a sheet of glass or flexible plastic, and the sacrificial layer is etched away, transferring the patterned metal to the surface.

So far, Rogers says, he's made metamaterial sheets a few inches on a side, but by using more than one stamp he expects to increase that to square feet. And, he says, the stamped materials actually have better optical properties than metamaterials made using traditional methods.
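
Refraction in these materials still obeys Snell's law; the index simply carries a minus sign. As a generic textbook illustration (not a measurement of the Illinois material):

    $n_1 \sin\theta_1 = n_2 \sin\theta_2$

For light passing from air ($n_1 = 1$) into a negative-index material with $n_2 = -1$ at a 30-degree angle of incidence, $\sin\theta_2 = -0.5$, so $\theta_2 = -30$ degrees: the refracted beam emerges on the same side of the surface normal as the incoming beam, the opposite of what happens in glass or water.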

"We can now bang out gigantic sheets of this stuff," Rogers says. Making the mold for the stamp takes care, but once that mold has been created, it doesn't take long to make many reusable stamps.

Xiang Zhang, chair of mechanical engineering at the University of California, Berkeley, says this work represents an important step toward applications for optical metamaterials. "Various metamaterials could be made bigger by this method," says Zhang, who in 2008 created the design that Rogers used for this first demonstration. "For example, macroscale 2-D lenses and cloaks may be possible, and possibly solar concentrators, too." One potential application is in lenses that integrate multiple functions in single devices, for telecommunications and imaging. 

"This printing technique is quite powerful and has the potential to scale to very large areas," says Nicholas Fang, an associate professor of mechanical engineering at MIT. Fang says this type of metamaterial would be particularly interesting for infrared  imaging devices. 

By Katherine Bourzac
From Technology Review

Researchers Crack Audio Security System

A team of computer scientists with expertise in artificial intelligence, audio processing, and computer security has come up with a way to automatically defeat the systems that prevent spammers from creating new accounts on sites like Yahoo, Microsoft's Hotmail, and Twitter. 

Many websites require users to correctly transcribe a string of distorted characters—a puzzle known as a CAPTCHA—to gain access. These tests are relatively easy for people, but very hard for computers. Most sites also make CAPTCHAs available in audio form, for vision-impaired users, and the researchers found that their algorithm could solve many of these audio CAPTCHAs. Researchers at Stanford University have demonstrated the vulnerability of audio CAPTCHAs before, in 2008, but the new work targets newer, more secure versions.

The ability to automatically defeat CAPTCHAs could make it much cheaper for spammers to churn out spam. Right now, spammers pay humans sweatshop wages to solve CAPTCHAs by hand, at a cost of up to one cent apiece.

Team leader Elie Bursztein, of Stanford University, says the team's algorithm, called deCAPTCHA, was able to defeat audio CAPTCHAs from Microsoft and Yahoo in almost half of all cases. Microsoft has since switched to another type of CAPTCHA, which the algorithm is still able to defeat in 1.5 percent of cases.

"[In defeating security measures,] if you cross the 1 percent threshold, you are in a lot of trouble," says Burzstein. "It's almost a free pass."

Luis von Ahn, who invented the CAPTCHA, says that, in reality, companies can control the rate at which audio CAPTCHAs are compromised by limiting the number that can be solved per day, or the number that can be solved from a single IP address. But, says independent security expert Markus Jakobsson, "it's very important to understand how we can break things before the bad guys do."

An audio CAPTCHA reads aloud a string of letters or numbers with added audio distortion. The Stanford team created a learning algorithm to "process the sound in a way that was as close as possible to the way that we think the human ear is made," says Bursztein. This meant focusing on lower-frequency sounds, which humans are especially good at processing, and eliminating as much of the noise from audio CAPTCHAs as possible.
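
As a rough illustration of that kind of preprocessing (this is not the team's deCAPTCHA code, and the filter order, cutoff frequency, and energy threshold below are arbitrary assumptions), one could low-pass filter the audio and then pick out the high-energy stretches likely to contain spoken characters:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def preprocess_captcha(samples, fs=8000, cutoff_hz=2500.0):
        """Low-pass filter the audio, keeping the lower-frequency band
        that human hearing relies on (fs and cutoff are assumptions)."""
        b, a = butter(4, cutoff_hz, btype="low", fs=fs)
        return filtfilt(b, a, samples)

    def find_voiced_segments(samples, fs=8000, frame_ms=20, threshold=0.1):
        """Return (start, end) sample indices of frames whose RMS energy
        exceeds a fraction of the loudest frame: crude noise removal."""
        frame = int(fs * frame_ms / 1000)
        n = len(samples) // frame
        rms = np.array([np.sqrt(np.mean(samples[i*frame:(i+1)*frame]**2))
                        for i in range(n)])
        active = rms > threshold * rms.max()
        segments, start = [], None
        for i, on in enumerate(active):
            if on and start is None:
                start = i * frame          # a loud region begins
            elif not on and start is not None:
                segments.append((start, i * frame))
                start = None
        if start is not None:
            segments.append((start, n * frame))
        return segments

Each segment found this way would then be handed to a classifier trained on labeled recordings, which is where the machine-learning half of the attack lives.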

Bursztein's team is also working to crack several new types of audio CAPTCHA. One type plays two voices reading different strings of letters or words at the same time. Humans are especially good at picking out one voice when surrounded by many competing conversations in a crowded room, but computers are terrible at this task. A second type combines words with music. 

Even if many existing CAPTCHAs are vulnerable to attack, says Jakobsson, their failure isn't as severe as the compromise of a password system. "[CAPTCHA defeat] is a gradual decay of security. You don't have to keep everybody out to feel like you have security—some failure is tolerable."

By Christopher Mims  
From Technology Review

Tapping Quantum Effects for Software that Learns

In a bid to enable computers to learn faster, the defense company Lockheed Martin has bought a system that uses quantum mechanics to process digital data. It paid $10 million to the startup D-Wave Systems for the computer and for support in using it. D-Wave claims this is the first-ever sale of a quantum computing system.

The new system, called the D-Wave One, is not significantly more capable than a conventional computer. But it could be a step on the road to fuller implementations of quantum computing, which theoreticians have shown could easily solve problems that are practically impossible for conventional computers, such as defeating encryption systems by solving the underlying mathematical problems at incredible speed.

 Quantum calculation: At the center of this image, a series of prototype chips designed to use quantum mechanical effects to work with data.

In a throwback to the days when computers were the size of rooms, the system bought by Lockheed occupies 100 square feet. Rather than acting as a stand-alone computer, it operates as a specialized helper to a conventional computer running software that learns from past data and makes predictions about future events. Lockheed says it intends to use the new purchase to help identify bugs in products that are complex combinations of software and hardware. The goal is to reduce the cost overruns caused by unforeseen technical problems with such systems, says Lockheed spokesperson Thad Madden. Such problems were partly behind the recent news that the company's F-35 strike fighter is more than 20 percent over budget.

At the heart of the D-Wave One is a processor made up of 128 qubits—short for quantum bits. Each qubit uses magnetic fields to represent a single 1 or 0 of digital data at any time, and can also exploit quantum mechanics to attain a state of "superposition" that represents both at once. When qubits in superposition states work together, they can work with exponentially more data than the equivalent number of regular bits.

The qubits take the form of metal loops rich in niobium, a material that becomes a superconductor at very low temperatures and is more commonly found in the magnets inside MRI scanners. The qubits are linked by structures called couplers, also made from superconducting niobium alloy, which control the extent to which the magnetic fields representing adjacent qubits affect one another. Performing a calculation involves using magnetic fields to set the states of qubits and couplers, waiting a short time, and then reading out the final values from the qubits.
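
That arrangement, a field setting for each qubit and a coupler setting for each linked pair, corresponds to what physicists call an Ising energy function. In standard (not D-Wave-specific) notation:

    $E(s) = \sum_i h_i s_i + \sum_{i<j} J_{ij}\, s_i s_j, \qquad s_i \in \{-1, +1\}$

Here the $h_i$ play the role of the per-qubit magnetic fields, the $J_{ij}$ the coupler strengths, and the machine's task is to find the assignment of spins $s_i$ that minimizes $E$.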

D-Wave's machine is intended to do one thing better than a conventional computer: find approximate answers to problems that could only be solved exactly by trying every possible solution. It runs a single algorithm, known as quantum annealing, which is hard-wired into the machine's physical design, says Geordie Rose, D-Wave's founder and CTO. Data sent to the chip is translated into values for the qubits and settings for the couplers that connect them. After that, the interlinked qubits go through a series of quantum mechanical changes from which the solution emerges. "You stuff the problem into the hardware and it acts as a physical proxy for what you're trying to solve," says Rose. "All physical systems want to sink to the lowest energy level, with the most entropy," he explains, "and ours sinks to a state that represents the solution."
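
Simulated annealing, the classical cousin of that process, captures the "sink to the lowest energy" intuition in a few lines of code. The sketch below is a minimal classical illustration of annealing on the Ising energy above; it is not D-Wave's hardware, software, or algorithm:

    import math
    import random

    def simulated_annealing(h, J, steps=20000, t_start=5.0, t_end=0.01):
        """Minimize E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[i,j]*s[i]*s[j]
        over spins s[i] in {-1, +1} by flipping spins while "cooling"."""
        n = len(h)
        s = [random.choice((-1, 1)) for _ in range(n)]

        def local_field(i):
            # h[i] plus every coupling term that involves spin i.
            f = h[i]
            for (a, b), j_ab in J.items():
                if a == i:
                    f += j_ab * s[b]
                elif b == i:
                    f += j_ab * s[a]
            return f

        for step in range(steps):
            t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
            i = random.randrange(n)
            d_energy = -2 * s[i] * local_field(i)  # energy change if spin i flips
            # Always accept downhill moves; accept uphill ones with a
            # probability that shrinks as the temperature drops.
            if d_energy < 0 or random.random() < math.exp(-d_energy / t):
                s[i] = -s[i]
        return s

    # Toy run: three mutually "disagreeing" spins (a frustrated triangle).
    print(simulated_annealing([0.0, 0.0, 0.0],
                              {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}))

The quantum version explores, in effect, many spin configurations at once through superposition, rather than one flip at a time.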

"You stuff the problem into the hardware and it acts as a physical proxy for what you're trying to solve," says Rose. 

Although exotic, this hardware is intended to be used by software engineers who know nothing of quantum mechanics. A set of straightforward protocols—known as APIs, for application programming interfaces—makes it easy to push data to the D-Wave system in a standard format.
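
The article doesn't spell out what those calls look like, so the following is a purely hypothetical sketch; the URL, payload shape, and "spins" response field are all invented for illustration:

    import requests

    # Hypothetical service: submit an Ising problem, get back low-energy spins.
    problem = {"h": [0.0, 0.5, -0.5], "J": {"0,1": 1.0, "1,2": -1.0}}
    resp = requests.post("https://quantum-solver.example.com/solve",  # invented URL
                         json=problem,
                         headers={"Authorization": "Bearer <api-token>"})
    print(resp.json()["spins"])  # invented response field

The point of such an interface is that a programmer never touches qubits or couplers directly; the translation happens behind the API.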

"You send in your problem and then get back a much more accurate result than you would on a conventional computer," says Rose. He says tests have shown software using the D-Wave system can learn things like how to recognize particular objects in photos up to 9 percent more accurately than a conventional alternative. Rose predicts that the gap will rapidly widen as programmers learn to optimize their code for the way D-Wave's technology behaves.

Google has been experimenting with D-Wave's technology for several years as a way to speed up software that can interpret photos. The company's software engineers use it as a kind of cloud service, accessing a system at D-Wave's Vancouver headquarters over the Internet. In 2009, the company published papers showing that the quantum system outperformed conventional software running in a Google data center.

Allan Snavely of the San Diego Supercomputer Center has used conventional versions of algorithms like the one built into D-Wave's system. He says the kind of "needle in a haystack" problems they are designed for are important in computer science. "These are problems where you know the right answer when you see it, but finding it among all the exponential space of possibilities is difficult," he says. Being able to experiment with the new system using conventional software tools will be tempting to programmers, says Snavely. "It's intriguing to consider the possibilities—I would like to get my hands on one."

D-Wave's technology has been dogged by controversy during the 12 years it has been in development, with quantum computing researchers questioning whether the company's technology truly is exploiting quantum effects. A paper published in the science journal Nature on May 12 went some way to addressing those concerns, reporting that the behavior of one of the eight-qubit tiles that make up the D-Wave One is better explained by a mathematical model assuming quantum effects at work than by one assuming only classical physics was involved. 

However, the experiment did not show the results of running a computation on the hardware, leaving doubt in the minds of many quantum computing experts. Rose says the technology definitely uses quantum effects, but that to programmers only one thing really matters. "Compared to the conventional ways, you get a piece of software that is much better."

By Tom Simonite
From Technology Review

Software Transforms Photos Into 3-D Models

Ever wished you could take an object in a museum home with you, instead of settling for a few photos?

The design software company Autodesk will release free software next week that can turn those snapshots into your own personal replica from a 3-D printer. Called Photofly, the software extracts a detailed 3-D model from a collection of overlapping photos.

 Body double: Photofly can build a detailed 3-D model of a person’s head using just 40 photos taken from different viewpoints.

"We can automatically generate a 3-D mesh at extreme detail from a set of photos—we're talking the kind of density captured by a laser scanner," says Brian Mathews, who leads a group at the company known as Autodesk Labs. Unlike a laser scanner, though, the equipment needed to capture the 3-D rendering doesn't cost tens or hundreds of thousands of dollars. An overlapping set of around 40 photos is enough to capture a person's head and shoulders in detailed 3-D, he says.

The software, which will be available for Windows computers only, uploads a user's photos to a cloud server for processing and then downloads the results. The 3-D rendering can be viewed as a naked wire-frame model of the captured scene or a version with realistic surface color and texture. The colored models can also be shared for viewing in an iPad app, while the underlying wire frame can be exported in standard 3-D design formats for editing.

Models produced from a well-taken set of photos will be spatially accurate to within 1 percent, says Mathews, high enough quality to be used for professional design projects. "You could send that model from your photos to a 3-D printing service to physically re-create what you saw, perhaps at a different scale," says Mathews.

In recent years, the cost of 3-D printers and printing services has fallen, with hobbyist machines like the MakerBot and consumer services such as Shapeways that will print out 3-D models in a variety of ceramics, plastics, and metals. Autodesk's is the first consumer software capable of producing models accurate enough for 3-D printing, says Mathews. Similar projects, such as Microsoft Research's PhotoSynth, and an app based on the same technology that enables a cell phone to convert its photos into 3-D models, capture only enough 3-D data to add an extra dimension to the content of photos, says Mathews. The same was true of a previous version of Photofly. "Generating accurate geometry from what we see in the photos is far more exciting."

Photofly runs through several steps to distill an accurate model from a collection of photos. First, it calculates the position from which each photo was taken by triangulating based on the different views of certain distinctive features. Once the camera positions have been determined, the software goes through a second round of more detailed triangulation, using contrasting views to generate a detailed 3-D surface for everything visible.
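
The two rounds of triangulation are the classic structure-from-motion recipe. Purely as a hedged sketch of the first round (recovering the relative pose of one camera pair, written against OpenCV rather than anything Autodesk has published; the camera intrinsics matrix K is assumed known from calibration):

    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Estimate the rotation R and translation t of camera 2 relative
        to camera 1 from matched features, then triangulate the matches."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)

        # Keep only distinctive matches (Lowe's ratio test).
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])

        # Essential matrix from the epipolar constraint, with RANSAC to
        # reject bad matches, then decompose it into R and t.
        E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

        # Triangulate the matches into 3-D points (homogeneous to Euclidean).
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
        return R, t, (pts4d[:3] / pts4d[3]).T

The dense second pass, which turns those sparse points into a full surface, is far heavier computationally, which is one reason Photofly ships the work to a cloud server instead of running it locally.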

"This technology and the popularity of cameras and cell phones means there are now a couple billion sensors out there that anyone can use to create 3-D content," says Yuan-Fang Wang, a computer scientist at the University of California, Santa Barbara, and founder of VisualSize, which is working on technology similar to Autodesk's.

Wang says the technology has become robust and simple enough for the consumer market, but there are still limitations that may frustrate some people. "An object cannot be too plain, because the software has nothing to compare, or too shiny, and it cannot be moving much," he says. Because few ordinary users have experienced the technology yet, it is still unclear how people will handle that, or just which applications will prove popular, Wang adds.

Photofly can be used on objects large and small, from bugs to buildings, and can also handle photos from different sources. A video shows a model of Mount Rushmore created from a variety of online images taken by many different people.

After seeing a demo of the technology at the TED conference earlier this year, paleontologist Louise Leakey has been using Photofly in Kenya to capture early human bones at high detail. The models provide her team with a way to collaborate with distant colleagues and to record accurate measurements of specimens, such as the spacing and size of teeth, without actually handling them (see a video of a specimen captured by Leakey).

Autodesk will also explore using Photofly to capture 3-D models of buildings to speed retrofits designed to boost their energy efficiency. "You can take a bunch of photos and very quickly have a model to make the key measurements needed to figure out what needs to be done to make a building greener," says Mathews.

By Tom Simonite
From Technology Review