Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........

SIM-sized satellites to lift off with Endeavour this afternoon

They won't be beaming GPS or radio signals back to Earth anytime soon, but these one-inch-square satellites could one day travel to distant planets -- without fuel. Developed over a period of three years by a team of undergraduates at Cornell University, the Sprite chips could eventually be used for communication, flying in clusters like tiny space plankton. After hitching a ride on board the final space shuttle Endeavour mission this afternoon, the three prototype satellites will be mounted outside the International Space Station, where they'll sit for the next few years, exposed to conditions found only beyond our atmosphere. Perhaps someday we'll even see some "Sprite" KIRFs by the time China's own space station is ready to hit the launchpad in 2020. 

By Zach Honig
From engadget 

A single gene may have shaped human cerebral cortex

The size and shape of the human cerebral cortex - responsible for all conscious thought - is largely determined by mutations in a single gene. The findings, from the Yale School of Medicine and two other universities, are based on a genetic analysis of one Turkish family and two Pakistani families whose children were born with the most severe form of microcephaly. 

The children's brains are just 10 percent of the normal size, and lack the normal human cortical architecture. And the researchers found that the deformity was caused by mutations in the same gene, centrosomal NDE1, which is involved in cell division.

They say that this combination of an undersized brain and simplified architecture has not been linked to any other gene associated with brain development, which implies that NDE1 played an essential role in the development of the complex cerebral cortex in humans.

"The degree of reduction in the size of the cerebral cortex and the effects on brain morphology suggest this gene plays a key role in the evolution of the human brain," said professor Murat Gunel, co-senior author of the paper.

"These findings demonstrate how single molecules have influenced the expansion of the human cerebral cortex in the last five million years. We are now a little closer to understanding just how this miracle happens."

By Kate Taylor 
From tgdaily

Herding Swarms of Microrobots

Imagine a swarm of microrobots—tiny devices a few hair widths across—swimming through your blood vessels and repairing damage, or zipping around in computer chips as a security lock, or quickly knitting together heart tissue. Researchers at the University of California, Berkeley, Dartmouth College, and Duke University have shown how to use a single electrical signal to command a group of microrobots to self-assemble into larger structures. The researchers hope to use this method to build biological tissues. But before microrobots can do anything like that, researchers must first figure out a good way to control them. 

 Tiny robots: This wafer holds many individual microrobots. Each robot consists of a body (about 100 micrometers long) and an arm that it uses to turn. Several of these robots can be controlled at once.

"When things are very small, they tend to stick together," says Jason Gorman, a robotics researcher in the Intelligent Systems Division at NIST who co-organizes an annual microrobotics competition that draws groups from around the world. "A lot of the locomotion methods that have been developed are focused on overcoming or leveraging this adhesion."

So far, most control methods have involved pushing and pulling the tiny machines with magnetic fields. This approach has enabled them to zoom around on the face of a dime, push tiny objects, and swim through blood vessels. However, these systems generally require complex setups of coils or other specialized components to generate the electromagnetic field, and getting the robots to carry out a task can be difficult. 

Bruce Donald, a professor of computer science and biochemistry at Duke, took a different approach, developing a microrobot that responds to electrostatic potential and is powered with voltage through an electric-array surface. Now he and others have demonstrated that they are able to control a group of these microrobots to create large shapes. They do this by tweaking the design of each robot a little so that each one responds to portions of the voltage with a different action, resulting in complex behaviors by the swarm.

"A good analogy is that we have multiple, remote-controlled cars but only one transmitter," says Igor Paprotny, a postdoctoral scientist at UC Berkeley and one of the lead researchers on this work, which he presented last week at a talk at Harvard University. During his talk, he passed around a container holding a wafer die the size of a thumbnail. On it were more than 100 microrobots. 

"What we do is slightly change how the wheels turn," he says. "Simple devices with a fairly simple behavior can be engineered to behave slightly different when you apply a global control signal. That allows a very complex set of behaviors."

The robots contain an actuator called a scratch drive, which bends in response to voltage supplied through the electric array. When it releases tension, it goes forward, in a movement similar to an inchworm's. But the key to the robots' varying behavior is the arms extending from the actuators. A steering arm on a microrobot snaps down in response to a certain amount of voltage, dragging on the surface and causing the robot to turn. By snapping the arm up and down one or two times a second, the team can control how much a given robot turns. To control a swarm, the team designed each robot with an arm that reacts differently during portions of the voltage signal. Computer algorithms vary the voltage sequence, prompting the robots to move in complex ways.
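The "one transmitter, many cars" idea can be sketched as a toy simulation. Everything specific below—the voltage bands, turn angle, and step size—is invented for illustration and not taken from the actual hardware; the point is only that one broadcast signal produces different trajectories because each robot's steering arm engages in a different voltage band.

```python
import math

class Microrobot:
    """Toy model: the steering arm engages only inside one voltage band."""

    def __init__(self, arm_lo, arm_hi):
        self.arm_lo, self.arm_hi = arm_lo, arm_hi
        self.x, self.y, self.heading = 0.0, 0.0, 0.0

    def step(self, voltage, distance=1.0, turn=math.pi / 8):
        if self.arm_lo <= voltage < self.arm_hi:
            self.heading += turn          # arm drags on the surface: turn
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

# Two robots with non-overlapping arm bands (illustrative values).
a = Microrobot(80, 100)
b = Microrobot(100, 120)

# One global signal drives both: 90 V steers only a, 110 V steers only b,
# and 60 V engages neither, so both drive straight.
for v in [90, 90, 110, 60]:
    a.step(v)
    b.step(v)

print(round(a.heading, 3), round(b.heading, 3))  # headings diverge
```

The same broadcast sequence leaves the two robots pointed in different directions, which is the mechanism the algorithms exploit when choreographing the swarm.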

"Electrostatic robots have an advantage in that their power is supplied through an electrode array that the microrobots sit on," says Gorman. "It can be very compact. Therefore, electrostatic microrobots can be embedded inside other things [like computer chips]. For magnetic robots, you have to supply electromagnetic field, and that requires a larger set-up." Others have worked on electrostatic microrobots, he adds, but this work is the furthest along. 

"His research is very advanced in terms of controlling multiple microrobots," says Zoltan Nagy, a roboticist at ETH Zürich who works with groups of magnetically controlled robots called Magmites.

"Most of the work to date has been on controlling a single robot that can move around in a pre-defined area on a substrate," adds Gorman. "However, many of the applications of interest will require control of lots of robots, like a colony of ants."  

So far, Paprotny has been able to control up to four robots on a single surface at once, and the robots can move several thousand times their body length per second, as detailed in a paper currently under review. He next plans to adapt the setup for a liquid environment so that the microrobots can assemble components of biological tissue into patterns that mimic nature.

"We're trying to come up with ways of self-assembling tissue units," says Ali Khademhosseini, an associate professor at Brigham and Women's Hospital at Harvard Medical School and a specialist in tissue engineering who is collaborating with Paprotny. "In the body, tissues are made in a hierarchical way—units repeat themselves over and over to generate larger tissue structures." Muscle tissue, for example, is made from small fibers, while liver tissue has a repeating hexagonal shape.

Khademhosseini has encased cells in jelly-like hydrogels and assembled them (using methods that include liquid-air interactions and surface tensions) into different regions to mimic biological tissues. But he thinks the self-assembling microrobots will allow more control in creating the tissues.

"We can try to combine cells and materials in microfabrication systems to come up with structures and assemble them in particular ways using the techniques Igor has developed," says Khademhosseini.

He envisions fabricating the gels and cells on top of teams of robots working in parallel to construct different parts of a tissue. "We could use the robots to do assembly," he says. "The cells, once they're assembled, come off from the robots, letting cells rearrange further to make things that are indistinguishable from natural tissue." Initially, he hopes to create small patches of heart tissues, and then things like heart muscles and valves, and assemble them all together in a heart. "That's where things are heading," he says. "But right now the challenge is we're still not very good at making each of these individual components."

By Kristina Grifantini
From Technology Review

App-Specific Processors to Fight Dark Silicon

A processor etched with circuits tailored to the most widely used apps on Android phones could help extend the devices' battery life. Researchers at the University of California, San Diego, have created software that scans the operating system and a collection of the most popular apps and then generates a processor design tailored to their demands. The result can be 11 times more efficient than today's typical general-purpose smart-phone chip, says Michael Taylor, who leads the GreenDroid project with colleague Steven Swanson. 

 Living Longer: Microprocessors designed around the most-used apps could make smart phones more energy-efficient.

"Chip design for mobile phones needs rethinking for two reasons," says Taylor. "One is to improve their use of the limited energy available to a phone, and the other is to attack a problem called dark silicon, which is set to make conventional chip designs even less efficient."

"Dark silicon" is a portion of a microchip that is left unused. Although uncommon today, dark silicon is expected to become necessary in two or three years, because engineers will be unable to reduce chips' operating voltages any further to offset increases in power consumption and waste heat produced by smaller, faster chips. 

Operating shrinking transistors with lower voltages was "traditionally the escape valve that enabled more computational power without more heat output," says Taylor, "but now there is no place to go." Operating voltages have crept close to a fundamental limit at which transistors cease to function practically. This means that soon, as transistors continue to get smaller, each generation of chips will be less efficient than the one before, he says. "If you kept using all of the chip, each generation would generate double the heat of the one before." Keeping energy use constant will require switching on only certain parts of a chip at any one time.
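Taylor's doubling argument can be put in rough numbers. This is a back-of-envelope sketch in normalized units of my own choosing, not real process figures: if each generation doubles transistor density while stalled voltage scaling keeps per-transistor switching power roughly flat, a fixed power budget lets you light up only half as much of the chip each generation.

```python
# Normalized units: the whole chip's power budget is 1.0, and one "unit" of
# transistors at full activity also draws 1.0 in generation zero.
power_budget = 1.0
per_transistor = 1.0    # flat: voltage scaling no longer reduces this
transistors = 1.0       # doubles each generation (Moore's law)

for gen in range(5):
    usable = min(1.0, power_budget / (transistors * per_transistor))
    print(f"gen {gen}: {usable:.4f} of the chip can be active at once")
    transistors *= 2.0
```

Under these assumptions the active fraction falls geometrically—1, 1/2, 1/4, and so on—which is exactly the "dark" portion of the die growing with each shrink.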

Taylor and Swanson's GreenDroid design sidesteps this by surrounding a processor's main core—the part of a chip that executes instructions—with 120 smaller ones that each take care of one piece of code frequently needed by the apps used most on a phone. Each core's circuits closely mimic the structure of the code on which they are based, making them up to 10,000 times more efficient than a general-purpose processor core performing the same task. "If you fill the chip with highly specialized cores, then the fraction of the chip that is lit up at one time can be the most energy efficient for that particular task," Taylor says.

Rather than manually translating source code into processor cores, the UCSD team has developed software to do it. They record the computational demands of the Android OS when running popular apps for e-mail, maps, video, and the Web radio service Pandora, among others, and from that information, the software generates the GreenDroid chip design.

Because around 70 percent of that code is shared between multiple apps or parts of the OS, a GreenDroid's specialized cores can handle much of a phone's most energy-sapping work. Detailed simulations of a complete GreenDroid processor prove its superior efficiency, says Taylor. "We're sending the first design off to be fabricated in June and have designed a board so we can plug it in, install Android and apps, and then benchmark against conventional designs," he says. 
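One way to see why that 70 percent coverage figure matters so much is an Amdahl's-law-style energy estimate. The function and numbers below are my own illustration, not the GreenDroid team's model: if a fraction f of the workload runs on specialized cores that use 1/s of the energy of the general-purpose core, the remaining energy is (1 - f) + f/s of the original, so the uncovered fraction quickly becomes the limit no matter how efficient the cores are.

```python
def energy_ratio(f, s):
    """Energy used, relative to running everything on the general core.

    f: fraction of work offloaded to specialized cores
    s: energy-efficiency factor of those cores vs. the general core
    """
    return (1 - f) + f / s

# Even with enormously efficient cores, 70% coverage caps the overall win,
# because the 30% still running on the general-purpose core dominates.
for s in (10, 1_000, 10_000):
    print(f"s={s}: {1 / energy_ratio(0.70, s):.2f}x overall improvement")
```

This is why the team profiles real app workloads to maximize coverage: raising f helps far more than making each individual core more efficient.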

Having a custom processor fabricated is extremely expensive and rare in academia. The chip will use transistors smaller than those currently on the market, with feature sizes as small as 28 nanometers. Processors with 32-nanometer features have only recently reached the market, and it is in the next generation, at 22 nanometers, that dark silicon is expected to become a serious challenge.

Kevin Skadron, a professor at the University of Virginia, says the UCSD strategy is a good fit with smart phones, because apps are tightly integrated with a smart phone's OS. "They are wise to target Android," he says, "because on a phone the OS is responsible for a huge amount of the work done by the processor. That means every user of every phone will benefit from their specialized cores." Phones with GreenDroid-style processors can be expected to last longer than conventional phones with the same battery, or to have the same lifetime with a sleeker design, he says.

However, the specialized hardware of this approach has drawbacks that make it less useful for nonmobile devices, says Skadron. "It's more challenging with a PC or server, because the operating system has less effect on what the processor does. The applications on top of that are most important, and they vary a lot more between users."

That drawback of specialization could apply to phones, too—for example, if new apps emerge that are unlike those used to generate a GreenDroid design. "For mobile phones, we're not too worried, because people replace them so quickly," says Taylor. And because upgrades to apps and operating systems tend to be evolutionary rather than revolutionary, he says, it's unlikely that many of a GreenDroid chip's specialized cores would become completely useless during a smart phone's short lifetime. Taylor and Swanson did add features to their design that allow slight tweaking of conservation cores to fit new code, but some upgrades will be too big for that. "If that happens, your phone wouldn't stop working, but its energy efficiency would drop," says Taylor, noting that this prospect probably wouldn't trouble manufacturers too much anyway.

By Tom Simonite
From Technology Review

Controlling Prosthetic Limbs with Electrode Arrays

To design prosthetic limbs with motor control and a sense of touch, researchers have been looking at ways to connect electrodes to nerve endings on the arm or leg and then to translate signals from those nerves into electrical instructions for moving the mechanical limb. However, severed nerve cells on an amputated limb can only grow if a structure is present to support them—much the way a trellis supports a growing vine. And they are notoriously fussy about the shape and size of that structure. 

Coiled conduits: The microscopic channels in this polymer roll are the right size and shape for bundles of severed nerve cells to grow through them. The scaffold, augmented with electrodes, is intended to transmit electrical signals between an amputee’s nervous system and prosthetic limb. 

"Cells are like people: they like furniture to sit in that's just the right size," says David Martin, a biomedical engineer at the University of Delaware. "They're looking for a channel that's got the 'Goldilocks'-length scale to it—how far apart the ridges are, how tall they are, how [wide] they are." 

Ravi Bellamkonda's lab at Georgia Tech has designed a tubular support scaffold with tiny channels that fit snugly around bundles of nerve cells. The group recently tested the structure with dorsal root ganglion cells and presented the results at the Society for Biomaterials conference earlier this month. 

The scaffold begins as a flat sheet with tiny grooves, similar to corrugated iron or cardboard. It is then rolled to form a porous cylinder with many tiny channels suited for healthy nerve-cell growth. The floors of the conduits double as electrodes, brushing up close to the nerve bundles and picking up nerve signals. "The thing that's different is that the patterns can be much more precisely controlled, and the orientation of the nerve bundles is essentially perfect here," says Martin. "It's a nice model system, and the ability to control nerve growth is what's really going to be valuable."

The ultimate goal is to enable two-way communication between the prosthetic limb and the wearer. Eventually, this design could separate the two kinds of nerve cells within a bundle, so neural cues directing hand movement would travel along one channel and information about touch and temperature from the prosthetic limb would travel to the brain along another channel. "The 'jellyroll' should in principle allow [them] to select through those channels—that to me is where the real excitement is," says Martin. "That's news for the future, but you've got to be able to walk before you can run."

In previous attempts to tap into neural signals, scientists have fitted severed nerve cells with "sieve electrodes"—flat metal disks with holes intended for nerves to grow through. "The problem with the sieve electrode is that the nerves wouldn't grow into it reliably," says Bellamkonda. 

Current work on growing aligned nerve bundles includes foam supports with pores suited for nerve growth, and fabrics with aligned nanofibers along which nerves are intended to grow. But the jellyroll design has the potential to be a cut above the rest. 

The multichannel scaffold could give added dexterity to prosthetic limbs. "You need to be able to stimulate as many axons as possible for movement, and you need to be able to pick up signals from as many axons as possible," says Akhil Srinivasan, primary researcher on the project. The most sophisticated of the electrodes currently used at nerve endings have about 16 channels to control movement. But the arm has 22 degrees of freedom. "You need at least 22 reliable channels," says Mario Romero-Ortega, associate professor of bioengineering at the University of Texas, Arlington. "That's the limitation—we only have a few, but you need more." 

"The novelty, from my perspective, is the materials they use [are ones they can] scale up," Romero-Ortega says. The electrode-roll design builds on previous work, but the new scaffold is made of materials that are safe for biological use. "They're the first to show in vitro growth," Romero-Ortega says.

To make the microarrays, a coat of the polymer polydimethylsiloxane is laid down on a glass slide to create a thin, uniform base, and a layer of a light-sensitive polymer, SU-8, is added. Ultraviolet light is shined on the SU-8 through a grating, and the parts of the surface exposed to the light bond together to form walls. The unbonded sections in between are then washed away, leaving behind row upon row of conduits. The grooved surface is capped with a second layer of base polymer, and the polymer sandwich is rolled into a cylinder. 

So far, the rolled-up microarray still lacks electrodes, but Srinivasan says the next steps will be to insert gold electrodes into the base of the scaffold. The wired microarray will then be tested in a rat model. 

"I think it's a clever design," says Dominique Durand, a professor of biomedical engineering at Case Western Reserve University. "They still haven't shown the electrodes, but that's a problem for another day." 

By Nidhi Subbaraman
From Technology Review

Opening Up the Brain with Ultrasound

The cells lining the brain's blood vessels are tightly packed together—like a good defensive line, they keep bacteria and other blood-borne intruders from getting through, shielding the brain. But this protective layer, called the blood-brain barrier, also thwarts efforts to deliver drugs like chemotherapy agents to the brain, so scientists have long searched for ways to disrupt it selectively to allow treatments in. A startup company called Perfusion Technology is developing a technique to open this barrier by bathing the brain in ultrasound waves.

 A path into the brain: Perfusion Technology is developing a headset (top) that would deliver ultrasound waves throughout the brain, allowing cancer drugs or other large molecules to slip through the blood-brain barrier. At bottom, a section of a monkey's brain after treatment with the device shows that it allowed a chemical marker (brown) to penetrate the brain.

Ultrasound has been investigated for a decade as a tool for opening the blood-brain barrier. Most techniques, however, rely on specialized equipment to focus the ultrasound waves to a tiny point. They also require an injection with microbubbles to amplify the effect, and an MRI machine to guide the treatment. Al Kyle, president and CEO of Perfusion Technology, which is based in Andover, Massachusetts, says that the company's method is simpler and cheaper. Rather than opening the blood-brain barrier briefly at a single point, Perfusion uses a specially designed headset to expose the entire brain to low-intensity ultrasound waves for an hour-long treatment session. 

The company is developing the treatment specifically for patients with brain tumors. A patient could receive the ultrasound during an outpatient session of intravenous chemotherapy, to open the blood-brain barrier and let the drugs into the brain. Kyle says it would be "a kinder and gentler way of delivering therapeutics to the brain" than current invasive methods, such as an infusion pump or a surgical implant. He also believes that his company's approach would be better than focused methods when it comes to treating tumors that have spread to multiple parts of the brain, because it reaches the entire brain at once.  

Although Perfusion is initially developing the technique to treat primary brain tumors, the majority of brain cancers originate elsewhere and metastasize to the brain; in these cases, the technique might help deliver drugs designed for other kinds of cancer into the brain, Kyle says. He further believes the method could someday help treat other kinds of neurological disorders. 

Nathan McDannold, a researcher at Harvard Medical School and Brigham and Women's Hospital who has been developing focused ultrasound for drug delivery to the brain, agrees that if Perfusion Technology's method is proved to work it will have advantages, because it doesn't require microbubble agents and expensive equipment. But the company still needs to prove the safety and effectiveness of its approach. The biggest safety concern is bleeding: when a similar ultrasound method was tested on stroke patients several years ago as a way of dissolving clots, it led to excessive bleeding. 

Kyle says his company has completed five animal studies over the past few years and has used its ultrasound technique to deliver several large molecules safely to the brain, including the cancer drug Avastin. The company hopes to complete preclinical animal studies in the next year and prepare for initial trials in humans.

By Courtney Humphries
From Technology Review

Rumor: LHC Sees Hint of the Higgs Boson

A leaked internal memo from physicists working at the Large Hadron Collider near Geneva reports a whiff of the Higgs boson, the long-sought theoretical particle that could make or break the standard model of particle physics.

The preliminary note, which is still under review, was posted April 21 in an anonymous comment on physicist Peter Woit’s blog, “Not Even Wrong.” Four physicists claim that ATLAS, one of the LHC’s all-purpose particle hunting experiments, caught a Higgs particle decaying into two high-energy photons — but at a much higher rate than the standard model predicts.

“The present result is the first definitive observation of physics beyond the standard model,” the note says. “Exciting new physics, including new particles, may be expected to be found in the very near future.”

The word from CERN, which operates the LHC, is that the leaked note is not an official result, and hasn’t been backed up by the cast of thousands that makes up the rest of the ATLAS collaboration. 

“It’s way, way too early to say if there’s anything in it or not,” said CERN spokesman James Gillies. “The vast majority of these notes get knocked down before they ever see the light of day.”

A member of the ATLAS collaboration who wished to remain anonymous noted that unexpected signals show up in the data pretty frequently, and turn out to be due to errors or biases that went uncorrected. The signal is much more likely to be a fluke than anything else.

The mood in the physics blogosphere is mixed between cautious excitement and outright denial.

“It may well turn out to be a false alarm … or it could be the discovery of the century … stay tuned,” wrote a blogger called Jester at Résonaances, a blog that covers particle theory from Paris. 

But graduate student Sarah Kavassalis at The Language of Bad Physics counters, “Until there is an official statement from the collaboration, or even one of the co-authors, this is just gossip. Don’t get excited. Seriously.”

This isn’t the first time a Higgs rumor has swept the physics community, either. A possible detection came from the CDF experiment at the Tevatron, a particle accelerator at Fermilab in Illinois, in July 2010. Blogger and physicist Tommaso Dorigo notes that CDF ought to have seen this new signal if it’s really there.

Whether the Higgs is there or not, the paper is real. Physicists with access to the paper say it begins, “It is the purpose of this Note to report the first experimental observation at the Large Hadron Collider (LHC) of the Higgs particle.”

“It’s exciting stuff if it’s true,” Gillies said.

The standard model of particle physics is widely regarded as a theory of almost everything, explaining most of what we know about matter with 17 subatomic particles. But only 16 of those particles have been observed. The holdout is the Higgs boson, which was introduced in the 1960s to explain why matter has mass.

Finding the Higgs is one of the main goals of the LHC, a 17-mile-long underground tunnel near the border of France and Switzerland. Protons traveling around this tunnel at near-light speeds smash into each other and create new particles that only exist at very high energies.

These particles quickly decay into a flurry of other, more mundane particles like photons. Detectors like ATLAS and its twin, called CMS, can track the masses and paths that those ordinary particles take, and use their paths to reconstruct what happened in the collision.

The authors of the note, led by physicist Sau Lan Wu of the University of Wisconsin-Madison, say that ATLAS saw two photons whose energies add up to 115 gigaelectronvolts (GeV). That’s the sort of thing you might expect if the Higgs boson had a mass of 115 GeV divided by the speed of light squared. (Because energy and mass are related by Einstein’s famous E=mc2 equation, particle physicists often speak of mass and energy interchangeably. For comparison, a proton has a mass of about 0.9 GeV/c2.)
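The unit conversion in the parenthetical is easy to check directly. This snippet uses standard physical constants; the 115 GeV figure and the ~0.9 GeV proton mass are the values quoted above.

```python
GEV_IN_JOULES = 1.602176634e-10   # 1 GeV = 1e9 eV; 1 eV = 1.602...e-19 J
C = 2.99792458e8                  # speed of light, m/s

def gev_to_kg(gev):
    """Convert a mass quoted in GeV/c^2 to kilograms via m = E / c^2."""
    return gev * GEV_IN_JOULES / C**2

higgs_kg = gev_to_kg(115)     # the candidate mass from the leaked note
proton_kg = gev_to_kg(0.938)  # proton mass, for scale

print(f"candidate Higgs: {higgs_kg:.2e} kg, about {115 / 0.938:.0f} protons")
```

The result—on the order of 10^-25 kg, roughly 120 times a proton's mass—shows why particle physicists prefer to quote masses in GeV/c² in the first place.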

That mass is suggestive. The LHC’s predecessor, the Large Electron-Positron Collider, or LEP, also may have seen a hint of a Higgs with the same mass in 2000, just weeks before LEP was shut down to make way for the LHC. Wu was also involved in that possible detection.

But if ATLAS really saw something, it’s something decidedly unusual, the researchers report. If an experiment produces the standard model’s version of the Higgs boson, only one in 100,000 of them will decay into two photons. The signal at ATLAS is 30 times bigger than the standard model predicts, meaning either they produced 30 times more Higgs bosons than expected, or 30 in 100,000 of them turned into two photons.

The purported signal could be a signature of supersymmetry, an extension of the standard model in which every particle has a “superpartner” that differs only in a quantum mechanical property called spin. Or it could be a particle that goes beyond the standard model altogether. One candidate is a hypothetical particle called the radion, which is associated with extra dimensions.

“Everybody wants to see something that takes us beyond the standard model,” Gillies said. “Finding a standard-model Higgs at the LHC … from a physics perspective it would be quite a boring thing. There’s a lot of hope and expectation that we can find something beyond the standard model.”

In the meantime, the LHC is charging forward into new territory. Around midnight Swiss time April 22, the LHC set a new record for beam intensity. The collider is scheduled to run at its current energy level, which is only half of what it’s capable of, until the end of 2012. 

That should be plenty of time to tell if the standard model Higgs exists or not, said ATLAS collaboration member Gustaaf Brooijmans of Columbia University in a press briefing in February.

“With the 2012 run added, in principle we believe we can exclude the existence of the Higgs boson over the full range, of course if it doesn’t exist,” he said. “If it exists we would see a small signal. Then it depends on what the mass is, what kind of signal we will see.”

By Lisa Grossman 

Watson Goes to Work in the Hospital

Designed to answer Jeopardy! questions, IBM's Watson is of little use beyond the game show's set. But some of the techniques that helped the computer defeat two human Jeopardy! champions in February are showing promise in a new context: the hospital. Researchers in Canada are using analytics like those that helped the computer decipher the language of clues to provide an early warning when babies in an intensive care unit acquire a hospital-borne infection.

Data deluge: Streams of medical data from babies in the ICU can provide early warning of an infection. 

As you would expect, babies in an ICU are surrounded by equipment that tracks their vital signs, but much of that data is wasted, says Carolyn McGregor, a researcher at the University of Ontario Institute of Technology. "They produce constant streams of data," she says, "but that information is often distilled down to a [nurse's] spot reading every 60 minutes, written on paper."

McGregor leads a project that has developed software to ensure that no scrap of that data goes to waste. At the neonatal ICU of the Hospital for Sick Children in Toronto, that software, dubbed Artemis, collects data from eight infant beds. The system can monitor the baby's electrocardiogram, heart rate, breathing rate, blood oxygen level, temperature, and blood pressure. It can also access data from medical records, such as the baby's birth weight. McGregor and colleagues are developing algorithms that use those signals to spot signs of hospital-borne infection before doctors and nurses do.

The current practice for diagnosing infections in the ICU has a high false-positive rate, which means that many babies are misdiagnosed and receive drugs they don't need, or occupy an ICU bed for longer than necessary. "Babies diagnosed with infection have, on average, a doubling of length of stay," says McGregor. "We want to reduce that."

The researchers have already shown that Artemis can use some of the same clinical observations that doctors use to diagnose babies. For example, the system can spot episodes of apnea (a pause in breathing), which is thought to increase in frequency when an infection sets in, says McGregor. Other research has shown that a variation in heart rate can warn of an infection 24 hours before most other symptoms occur. "We have proposed our own algorithm that uses those, and a wider range of data, to detect signs of infection," says McGregor.
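
One of the clinical signs mentioned above, apnea, lends itself to a simple illustration. The sketch below is our own (not McGregor's actual algorithm): it scans a stream of breathing-signal samples for pauses, assuming one sample per second and a hypothetical 20-second pause threshold.

```python
# Illustrative apnea-episode detector: flag stretches where the breathing
# signal stays below a threshold for at least `min_pause_s` seconds.
# Threshold values here are assumptions for demonstration only.

def apnea_episodes(breath_amplitudes, threshold=0.1, min_pause_s=20):
    """Return (start_second, duration) pairs for each apnea-like pause."""
    episodes = []
    start = None
    for t, amp in enumerate(breath_amplitudes):
        if amp < threshold:
            if start is None:
                start = t          # a pause begins
        else:
            if start is not None and t - start >= min_pause_s:
                episodes.append((start, t - start))
            start = None           # breathing resumed
    # Handle a pause that runs to the end of the recording
    if start is not None and len(breath_amplitudes) - start >= min_pause_s:
        episodes.append((start, len(breath_amplitudes) - start))
    return episodes

# 30 s of normal breathing, a 25-second pause, then 30 s of breathing:
signal = [1.0] * 30 + [0.0] * 25 + [1.0] * 30
print(apnea_episodes(signal))   # [(30, 25)]
```

A real system would of course work on noisy waveform data and combine this with other signals, such as heart-rate variability, before raising an alert.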

Two slightly different versions of that algorithm are being trialed in the ICU. The results are due to be published later this year. The software's effectiveness will be judged by comparing its decisions and observations with those made by medical staff. Algorithms that attempt to learn new warning signs of infection are also being tested. "No one has had access to all this data before, so we can't always refer to past research," says McGregor.

Artemis is built on an analytics platform called InfoSphere Streams that, like Watson, emerged from IBM research into ways that software can make decisions on the spot using data arriving at a high speed from many different sources.

"The processing paradigms we had before just didn't fit with the kind of streaming data we are dealing with," says McGregor. Software has traditionally performed analysis by systematically scouring a fixed, well-organized store of data, like a person navigating the stacks of a library, she explains.

InfoSphere Streams, in contrast, is based around a newer, alternative model known as stream computing. Information constantly flows into the software, where question-answering algorithms act like filters, pulling out answers from the information available at any particular moment.

That makes it possible to take on data that moves too fast to be written to hard disks, which are relatively sluggish, says Lipyeow Lim, a researcher at the University of Hawaii who previously worked at IBM's TJ Watson laboratory. "As data comes in, you want to look at it only once, then let it go," he says. InfoSphere Streams provides a kind of operating system for that approach, says Lim, sharing the work of implementing a particular program across many computers so the system as a whole can generate answers without committing any data to disk.

That enables the cluster of computers that make up Artemis to keep up with all the different data sources streaming in for different babies. "Monitoring one baby you could probably do with a traditional system and storage design," says Lim. "The challenge comes when you want to monitor many of them."
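
The one-pass model Lim describes can be sketched in a few lines. This is our own minimal illustration of the idea, not InfoSphere Streams itself: each reading is examined once as it arrives, folded into a constant-memory running summary, and then discarded, so nothing need ever touch a disk. The field names are assumptions.

```python
from collections import defaultdict

class RunningStats:
    """Constant-memory running mean for one vital sign of one patient."""
    def __init__(self):
        self.n = 0
        self.total = 0.0
    def update(self, value):
        self.n += 1
        self.total += value
    @property
    def mean(self):
        return self.total / self.n if self.n else 0.0

def consume(stream):
    """One pass over (bed_id, signal_name, value) readings from many beds."""
    stats = defaultdict(RunningStats)
    for bed_id, signal_name, value in stream:     # look at each reading once...
        stats[(bed_id, signal_name)].update(value)  # ...then let it go
    return stats

readings = [(1, "hr", 140), (2, "hr", 150), (1, "hr", 160)]
summary = consume(iter(readings))
print(summary[(1, "hr")].mean)   # 150.0
```

The point of the exercise is that memory use stays flat no matter how many readings flow through, which is what lets one cluster keep up with many beds at once.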

The same approach enabled Watson to answer questions fast enough to compete with human experts. As soon as it was provided with a new clue, many different natural-language-processing algorithms set to work in parallel. Their results streamed in to an analytics engine similar to that in InfoSphere Streams, which reconciled the different answers and decided on Watson's best response.

McGregor is taking advantage of Artemis's capacity for large amounts of data to develop it into a kind of remote diagnosis resource that can serve neonatal ICUs around the world. "We have implemented a cloud version so that a women's hospital in Rhode Island streams data to my lab over a secure Internet link," she says. Two hospitals in China will connect their neonatal ICUs using this technology later this year.

Meanwhile, machines that more closely resemble the Watson that wowed Jeopardy! viewers are on their own, slower road to the hospital. IBM has begun collaborating with voice-recognition company Nuance to investigate how a Watson-like system that digests research literature, medical records, and doctor's notes might advise clinicians.

By Tom Simonite
From Technology Review

DNA Nanoforms: Miniature Architectural Forms -- Some No Larger Than Viruses -- Constructed Through DNA Origami

Such diminutive forms may ultimately find their way into a wide array of devices, from ultra-tiny computing components to nanomedical sentries used to target and destroy aberrant cells or deliver therapeutics at the cellular or even molecular level.

Figures 1a and 1b display schematics for 2D nanoforms with accompanying AFM images of the resulting structures. Figures 1c-1e show 3D structures (a hemisphere, a sphere, and an ellipsoid, respectively), while figure 1f shows a nanoflask; each of these structures is visualized with TEM imaging.

In the April 15 issue of Science, the Yan group describes an approach that capitalizes on (and extends) the architectural potential of DNA. The new method is an important step in the direction of building nanoscale structures with complex curvature -- a feat that has eluded conventional DNA origami methods. "We are interested in developing a strategy to reproduce nature's complex shapes," said Yan.

The technique of DNA origami was introduced in 2006 by computer scientist Paul W.K. Rothemund of Caltech. It relies on the self-assembling properties of DNA's four complementary base pairs, which fasten together the strands of the molecule's famous double-helix. When these nucleotides, labeled A, T, C, and G, interact, they join to one another according to a simple formula -- A always pairs with T and C with G.
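
The pairing rule stated above -- A always with T, C always with G -- is simple enough to express directly. This small sketch (our illustration) computes the complementary strand that nanodesigners rely on when planning where staples will bind.

```python
# Watson-Crick complement: A<->T, C<->G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the base-by-base complement of a DNA sequence."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATCG"))   # TAGC
```

(In practice a staple binds the reverse complement, since the two strands of the double helix run in opposite directions, but the pairing rule itself is exactly this lookup.)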

Nanodesigners like Yan treat the DNA molecule as a versatile construction material -- one they hope to borrow from nature and adapt for new purposes.

In traditional DNA origami, a two-dimensional shape is first conceptualized and drawn. This polygonal outline is then filled in using short segments of double-stranded DNA, arranged in parallel. These segments may be likened to pixels -- digital elements used to create words and images displayed on a computer screen.

Indeed, Rothemund and others were able to use pixel-like segments of DNA to compose a variety of elegant 2-dimensional shapes (stars, rhomboids, snowflake forms, smiley faces, simple words, and even maps), as well as some rudimentary 3-dimensional structures. Each of these relies on the simple rules of self-assembly guiding nucleotide base pairing.

Once the desired shape has been framed by a length of single-stranded DNA, short DNA "staple strands" integrate the structure and act as the glue to hold the desired shape together. The nucleotide sequence of the scaffold strand is composed in such a way that it runs through every helix in the design, like a serpentine thread knitting together a patchwork of fabric. Further reinforcement is provided by the staple strands, which are also pre-designed to attach to desired regions of the finished structure, through base pairing.

"To make curved objects requires moving beyond the approximation of curvature by rectangular pixels. People in the field are interested in this problem. For example, William Shih's group at Harvard Medical School recently used targeted insertion and deletion of base pairs in selected segments within a 3D building block to induce the desired curvature. Nevertheless, it remains a daunting task to engineer subtle curvatures on a 3D surface," stated Yan.

"Our goal is to develop design principles that will allow researchers to model arbitrary 3D shapes with control over the degree of surface curvature. In an escape from a rigid lattice model, our versatile strategy begins by defining the desired surface features of a target object with the scaffold, followed by manipulation of DNA conformation and shaping of crossover networks to achieve the design," Liu said.

To achieve this, Yan's graduate student Dongran Han began by making simple 2-dimensional concentric ring structures, each ring formed from a DNA double helix. The concentric rings are bound together by means of strategically placed crossover points. These are regions where one of the strands in a given double helix switches to an adjacent ring, bridging the gap between concentric helices. Such crossovers help maintain the structure of concentric rings, preventing the DNA from extending.

Varying the number of nucleotides between crossover points and the placement of crossovers allows the designer to combine sharp and rounded elements in a single 2D form, as may be seen in figures 1a and 1b (with accompanying images produced by atomic force microscopy, revealing the actual structures that formed through self-assembly). A variety of such 2D designs, including an opened 9-layer ring and a three-pointed star, were produced.

The network of crossover points can also be designed in such a way as to produce combinations of in-plane and out-of-plane curvature, allowing for the design of curved 3D nanostructures. While this method shows considerable versatility, the range of curvature is still limited for standard B-form DNA, which will not tolerate large deviations from its preferred configuration -- 10.5 base pairs/turn. However, as Jeanette Nangreave, one of the paper's co-authors, explains, "Hao recognized that if you could slightly over twist or under twist these helices, you could produce different bending angles."
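
The over- and under-twisting idea can be made concrete with a back-of-envelope estimate (our illustration, not the paper's model). Relaxed B-form DNA completes one turn every 10.5 base pairs, i.e. 360/10.5 degrees of twist per base pair; building a helix at a different bp/turn figure imposes a small angular deviation at every base pair, which accumulates into bending.

```python
# Twist deviation per base pair relative to relaxed B-form DNA (10.5 bp/turn).
RELAXED_BP_PER_TURN = 10.5

def twist_deviation_per_bp(bp_per_turn):
    """Degrees of over-(+) or under-(-)twist per base pair vs. B-form."""
    return 360.0 / bp_per_turn - 360.0 / RELAXED_BP_PER_TURN

# 9 bp/turn is overtwisted (+5.71 deg/bp); 12 bp/turn is undertwisted
# (-4.29 deg/bp); 10.5 bp/turn is relaxed (0.00 deg/bp).
for bp in (9, 10.5, 12):
    print(f"{bp:>4} bp/turn: {twist_deviation_per_bp(bp):+.2f} deg/bp")
```

A few degrees per base pair may sound small, but over the hundreds of base pairs in a scaffold it is more than enough to sculpt a hemisphere or a flask neck.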

Combining the method of concentric helices with such non-B-form DNA (with 9-12 base pairs/turn) enabled the group to produce sophisticated forms, including spheres, hemispheres, ellipsoid shells and finally -- as a tour de force of nanodesign -- a round-bottomed nanoflask, which appears unmistakably in a series of startling transmission electron microscopy images (see figure 1, c-f).

"This is a good example of teamwork in which each member brings their unique skills to the project to make things happen." The other authors include Suchetan Pal and Zhengtao Deng, who also made significant contributions in imaging the structures.

Yan hopes to further expand the range of nanoforms possible through the new technique. Eventually, this will require longer lengths of single-stranded DNA able to provide the necessary scaffolding for larger, more elaborate structures. He credits his brilliant student (and the paper's first author) Dongran Han with a remarkable ability to conceptualize 2D and 3D nanoforms and to navigate the often-perplexing details of their design. Ultimately, however, more sophisticated nanoarchitectures will require computer-aided design programs -- an area the team is actively pursuing.

The successful construction of closed, 3D nanoforms like the sphere has opened the door to many exciting possibilities for the technology, particularly in the biomedical realm. Nanospheres could be introduced into living cells for example, releasing their contents under the influence of endonucleases or other digestive components. Another strategy might use such spheres as nanoreactors -- sites where chemicals or functional groups could be brought together to accelerate reactions or carry out other chemical manipulations.


Reprogrammable Chips Could Enable Instant Gadget Upgrades

Obsolescence is the curse of electronics: no sooner have you bought a gadget than its hardware is outdated. A new, low cost type of microchip that can rearrange its design on the fly could change that. The logic gates on the chip can be reconfigured to implement an improved design as soon as it becomes available—the hardware equivalent of the software upgrades often pushed out to gadgets like phones.

 Reprogrammable: This chip can be reconfigured to implement new designs, thus allowing device hardware to be upgraded.

The new chips—made by a startup called Tabula—are a cheaper, more powerful competitor to an existing type of reprogrammable chip known as a field-programmable gate array (FPGA). FPGAs are sometimes shipped in finished devices when that is cheaper than building a new chip from scratch—usually in expensive, low-volume products such as CT scanners. More commonly, FPGAs simply provide a way to prototype a design before making a conventional fixed microchip.

If programmable chips were more powerful and less costly they could be used in more devices, in more creative ways, says Steve Teig, founder and chief technology officer of Tabula. His company's reprogrammable design is considerably smaller than that of an FPGA. "FPGAs are very expensive because they are large pieces of silicon," says Teig, "and silicon [wafer] costs roughly $1 billion an acre." The time it takes for signals to traverse the relatively large surface of an FPGA also limits its performance, he says.

"It's like being inside a very large, one-story building—the miles of corridors slow you down," he says. As with a building, stacking layers of circuitry on top of each other helps, by providing a shortcut between floors, says Teig. But unfortunately, the technology needed to build stacked, 3-D chips is still restricted to research labs. Instead, Teig found a way to make a chip with just one level behave as if it were eight different ones stacked up. "Imagine you walked into the elevator in a building and then walked back out, and that I rearranged the furniture quickly while you were in there," says Teig. "You would have no way to tell you weren't on a different floor." Tabula's chips perform the same trick on the data they process, cycling between up to eight different layouts up to 1.6 billion times per second (1.6 gigahertz). Signals on the chip encounter those different designs in turn, as if they were hopping up a level onto a different chip entirely. "From its behavior, our [design] is indistinguishable from a stack of chips," says Teig, who calls the virtual chip layers "folds."
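
The time-multiplexing trick can be caricatured in software. In this toy model (our illustration, not Tabula's design), one "physical" array of logic cycles through a list of stored configurations; over a full cycle a signal passes through all of them in turn, just as if it had hopped up through a stack of chips.

```python
def run_folds(folds, value, cycles=1):
    """Pass `value` through each fold's logic in turn, as if the signal
    climbed a stack of chips; `folds` is a list of one-input functions."""
    for _ in range(cycles):
        for fold in folds:      # the hardware reconfigures between steps
            value = fold(value)
    return value

# Three illustrative "configurations" of the same hardware:
folds = [
    lambda x: x + 1,        # fold 0: an adder
    lambda x: x * 2,        # fold 1: a shifter
    lambda x: x ^ 0b1010,   # fold 2: an XOR stage
]

print(run_folds(folds, 3))   # ((3 + 1) * 2) ^ 0b1010 = 8 ^ 10 = 2
```

The real chip does this in silicon at up to 1.6 GHz, with the fold configurations held in on-chip memory, but the logical effect is the same: one array of cells, several circuits' worth of behavior.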

That brings speed advantages, because signals don't have to travel a long way across the surface of a chip to reach a new part of the circuit, as they do on an FPGA. When the chip loads a new fold, new circuitry appears in place of the old. Teig estimates that the footprint of a Tabula chip is less than a third of an equivalent FPGA's, making it five times cheaper to make, while providing more than double the density of logic and roughly four times the performance.

As with FPGAs, Tabula's chips contain arrays of many identical basic building blocks that can be programmed to implement any logic function. A memory store on the chip manages the different configurations that the chip cycles through.

Teig's approach makes sense, says Andre DeHon, who researches reconfigurable hardware at the University of Pennsylvania and has experimented with similar designs himself. Most of the area on an FPGA chip is made up of the wiring needed to connect the elements that do the work, he says. "This new type of design can run faster and avoids parts just sitting there while signals run down long paths."

Tabula could push reconfigurable silicon to displace conventional fixed-design chips in more places, says DeHon. Making a custom chip requires a guarantee of a few million units, says DeHon, and hence an upfront cost of several million dollars. "It's a matter of moving the crossover point between the cost of that and the cost of reconfigurable technology."
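
DeHon's "crossover point" is a breakeven calculation. With illustrative numbers of our own choosing (his ballpark of several million dollars up front for a custom chip, plus assumed per-unit costs), it looks like this:

```python
def breakeven_units(nre_cost, custom_unit_cost, reconfigurable_unit_cost):
    """Volume above which a custom chip beats a reconfigurable one:
    the fixed engineering (NRE) cost amortized over the per-unit saving."""
    saving = reconfigurable_unit_cost - custom_unit_cost
    if saving <= 0:
        raise ValueError("reconfigurable chip must cost more per unit")
    return nre_cost / saving

# Hypothetical figures: $5M NRE, $5/unit custom vs. $15/unit reconfigurable.
print(f"{breakeven_units(5_000_000, 5, 15):,.0f} units")   # 500,000 units
```

Cheaper reconfigurable chips shrink the per-unit saving of going custom, pushing that breakeven volume higher, which is exactly how Tabula's pricing could move the crossover point.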

Making the reconfigurable approach cheaper could enable even consumer electronics to ship with programmable chips, making it possible for them to be upgraded with new design tweaks. That approach is currently used only in some expensive equipment such as cell-phone base stations. "Sony could say, 'look at what our competitor Toshiba did', and upgrade the chips inside their TVs to provide new features," says Teig. "Getting to digital cameras or TVs is definitely within reach."

However, Rich Wawrzyniak, who tracks FPGAs and related technology for analyst firm Semico Research, points out that there are limitations to this approach. "The power consumption of these devices is relatively high, and likely too much for a device like a phone," he says.

But ultimately, says DeHon, reconfigurable chips should morph their design even more often, shifting their workings to match the task in hand in a blend of software and hardware. "These things are really platforms that can run any computation. The grand vision is that we come up with a way for a program's code to be mapped to the chip when it runs."

By Tom Simonite
From Technology Review

An Up-Close View of Schizophrenia

Reprogrammed cells generated from people with schizophrenia could help scientists study the disease more closely, according to a study published online today in Nature. Such cells would allow scientists to look at the disease on a cellular level, and also test potential drugs to combat the condition.

 Unsociable cells: These neurons, derived from reprogrammed stem cells from schizophrenia patients, form fewer connections than those from people without the disease. Cell nuclei are shown in blue, and branched fibers connecting neurons are green and red.

Researchers from the Salk Institute for Biological Studies began with skin cells taken from schizophrenic patients, which they reprogrammed to create induced pluripotent stem (iPS) cells—adult cells that have been transformed chemically or genetically into stem cells capable of giving rise to any type of tissue. They then coaxed those cells to differentiate into neurons. Scientists found that the diseased neurons made fewer connections with one another than did healthy neurons—a problem that antischizophrenia medication could alleviate.

The study is one of several recent papers showing that iPS cells derived from patients with specific diseases could give new insight into those complex diseases. Previous studies on iPS-derived neurons have focused on diseases with specific genetic mutations, and those that develop in early childhood.

Schizophrenia, however, is a more complex disease. It has both genetic and environmental origins, and often develops in adolescence or early adulthood. "This paper opens up the possibility that even psychiatric diseases can be potentially investigated using these cell models," says Kwang-Soo Kim, a stem-cell scientist at McLean Hospital and Harvard Medical School who was not involved in the study.

Fred Gage, a neuroscientist who led the Salk Institute study, says that much of what is known about differences in the brains of schizophrenia patients comes from examining brain tissue after death. Scientists can also use animal models engineered to mimic some of the genetic changes linked to the disease to study the impact of these mutations, but such models don't capture the full complexity of schizophrenia. 

But with neurons created from reprogrammed skin cells, Gage says, "the advantage is you're looking for the first time at living neurons from patients who have the disease."  

The researchers used skin cells taken from four patients with schizophrenia to create iPS cells, which they then differentiated into neurons. They compared these cells to neurons derived from people without the disease. 

After infecting cells with a modified rabies virus and then watching the spread of the virus from cell to cell, the researchers found that cells from people with schizophrenia formed fewer connections with one another, and made fewer projections to reach out to other cells. The researchers also analyzed gene activity in the cells, and identified nearly 600 genes whose activity differed from that in cells taken from people without schizophrenia. Only about a quarter of these 600 genes had already been identified in studies of postmortem tissue.

The team then tested five known schizophrenia drugs to see whether they could restore the cells' connectivity. After three weeks of treatment, only one drug, the antipsychotic loxapine, improved connectivity in all of the patients' cells. Gage says the cells could even be used to test how individual patients might respond to specific treatments. 

"This study illustrates that iPS cells could be really useful models to study these diseases at the cellular and molecular levels," says Kim. However, questions remain about how well these cells represent neurons in living brains. He says that further research should focus on creating iPS cells using newer techniques that don't genetically alter cells, and differentiating them into more specific types of neurons. 

By Courtney Humphries
From Technology Review

Speeding Up the Healing Process

We take it for granted that cuts, bruises, and scrapes will heal over time, but chronic, nonhealing wounds are a major health problem for millions of people, and the slow pace of normal wound healing leaves the body open to life-threatening infections. Researchers at Tufts University are developing agents that, applied to open sores, could someday help chronic wounds heal successfully, and speed the normal healing process.  

 Cells on the move: Blood vessel cells are shown migrating from left to right in response to injury. The red label shows the structures that cells use to move around.

The wound-healing agents target angiogenesis, the process of blood vessel growth. "If you can't build new blood vessels, it's virtually impossible to heal," says Ira Herman, the project's leader and director of the Tufts Center for Innovations in Wound Healing Research. When tissue is damaged, cells migrate into the wounded region and then proliferate to form new vessels that supply oxygen and nutrients to the upper layer of skin. This is one of the processes that stall in chronic wounds.

Two decades ago, Herman and colleagues first showed that an enzyme called collagenase, produced by the bacterium Clostridium histolyticum, could promote the healing process in cultured cells and animals. When added to cultured cells, it spurred the cells to crawl and grow faster. "It essentially created track stars out of laggards," Herman says. Although humans also produce collagenase, the bacterial enzyme was more effective. The enzyme digests collagen, creating small protein fragments called peptides. The researchers believe that the peptides created by the bacterial enzyme cause a more robust response from cells.

The researchers analyzed which peptides were unique products of the bacterial enzyme, and synthesized several of them to see if they could promote wound healing on their own. Herman says peptides would be easier than enzymes to produce and deliver as treatments. It would also be easier to control their effects, he says. In a paper published last September in Wound Repair and Regeneration, the team demonstrated that the peptides increase cell proliferation and angiogenesis in cell models that have multiple layers of cells to mimic the structure of skin and the underlying blood vessels. Herman says that the team has since had promising results testing the agents in animal models, and he hopes to move the technology toward human trials. He envisions that the peptides could be sprinkled over wounds as dry particles or suspended in a gel.

Robert Kirsner, chief of dermatology at the University of Miami Hospital, says that the work "lends insight into novel repair mechanisms, which, if capitalized upon, can hopefully lead to new opportunities for faster and better healing."

Elizabeth Ayello, a nurse and wound-care expert at the Excelsior College School of Nursing, says, "What's intriguing about this is it speeds up the healing," which Ayello says could potentially reduce scarring and infections. Such a treatment could be used on battlefields or in rural areas that lack easy access to hospitals. The acceleration of healing, she says, is particularly exciting given the high costs and significant pain that wounds cause patients. 

By Courtney Humphries
From Technology Review

Speedier Nanotube Circuits

Researchers have developed a new way of producing very dense arrays of carbon nanotubes suitable for making complex integrated circuits.

Nanotube transistors have shown great promise in simple experimental prototypes, but making them into the complex circuits—needed for the chips that run computers and cell phones—has proven tricky. Researchers at Stanford University are using the new fabrication method to build ever more complex circuits that they hope will soon rival the speed of silicon.

 Dense tubes: Speedier integrated circuits can be made from these densely packed arrays of carbon nanotubes.

For years, computer scientists have worried that, as they continue to miniaturize silicon transistors in order to cram more computing power in ever-smaller spaces, they will come up against the material's physical limits. Many replacements are being explored, including exotic semiconductors and another form of carbon called graphene. For digital logic, though, many believe carbon nanotubes show the greatest promise. "No other material has shown the same ability to scale down as aggressively as carbon nanotubes can," says Aaron Franklin, a researcher at IBM's Watson Research Center in Yorktown Heights, New York. When silicon transistors are miniaturized to a comparable size scale, they become leaky and unstable.

However, making a single high-performance nanotube transistor is one thing; making a large array of nanotubes integrated into a circuit is quite another. In the past few years, researchers at Stanford have made some of the most complex nanotube circuits yet. They developed workarounds to overcome carbon nanotubes' imperfections—the presence of metallic tubes amongst the semiconducting ones needed for transistors, for example. But these circuits were still relatively simple, performing arithmetic at about the level silicon circuits achieved in the 1960s.

The bottleneck has been making dense arrays of well-aligned carbon nanotubes. Stanford professors H-S Philip Wong and Subhasish Mitra previously used a stamping technique to transfer well-aligned nanotubes grown on quartz to a silicon-dioxide wafer for fabrication into transistor arrays and circuits. But there weren't very many nanotubes in each transistor to carry the current, and the low-current transistors didn't have high enough output to be made into complex circuits. That's because the Stanford researchers were only able to do one transfer step before the nanotubes became tangled up into a mess that couldn't be made into a transistor. When researchers try to lay down more nanotubes hoping for a proportional increase in current, says Mitra, "all sorts of weird interactions happen and sometimes you actually get less current."

The same researchers have now developed a method for holding down layers of nanotubes while more layers are deposited on top. "The nanotubes are like fragile threads that want to interact with one another," says Mitra. "We had to add a thin layer of a solid in between to protect them."

The Stanford researchers put down a thin layer of gold with each stamp. Once the gold and nanotubes are in place, the researchers etch away areas of gold where they want to place each transistor's electrical contacts. They then fill these holes with a metal contact material such as titanium or palladium. Finally, they etch away the rest of the gold. This entire structure is built up on top of a silicon dioxide wafer patterned with back gates for the transistors. This work is described online this week in the journal Nano Letters.

Mitra says it should be possible to perform several transfers, for a density of 100 to 200 nanotubes per micrometer. The transfer technique is also compatible with techniques the group has developed in the past to deal with stray metallic nanotubes and the occasional misplaced nanotube.

"Nanotubes are messy to work with today," says Franklin. But "there is no fundamental bottleneck that says this can't be done."

By Katherine Bourzac
From Technology Review

Growing Eyeballs

A clump of mouse embryonic stem cells can self-organize into three-dimensional structures reminiscent of the retina in the early stages of embryonic development, according to a new study published Wednesday in Nature. Researchers believe this process could one day serve as a source of cells to transplant into diseased and damaged retinas—a potential way to restore sight to the blind.

 Keep an eye out: From an undifferentiated ball of embryonic stem cells (gray) floating in a culture dish, two cup-shaped structures (green) resembling the embryonic retina emerge spontaneously. Structures like these could provide a reliable source of retinal tissue for blind patients.

Researchers at the RIKEN Center for Developmental Biology in Kobe, Japan, began with clusters of about 3,000 mouse embryonic stem cells floating in a mix of chemicals designed to spur differentiation into retinal cells. After a week, several balloon-like sacs of cells began to protrude from the surface of each cluster. Over the next few days, those sacs pouched inward on themselves to form structures resembling the optic cup—the complex dual-layered structure that emerges early in development and eventually becomes the retina.

This was surprising because the cells were not coaxed in any way to form such a structure, says Yoshiki Sasai, director of the Organogenesis and Neurogenesis Group at RIKEN and lead author of the study. From a homogeneous mass of embryonic stem cells, a sophisticated three-dimensional tissue spontaneously emerged. In other words, says Sasai, "the shape of an organ is actually internally programmed." This concept could have profound implications for the field of stem cell research; Sasai suspects that self-organization is possible for other tissue types as well. If that's true, scientists could potentially grow many kinds of organs in the laboratory.

"I think it really represents a significant step forward," says Jane Sowden, a developmental biologist at University College London. Sowden was not involved in the study. "Previous studies have shown that it's possible to differentiate [embryonic stem] cells into distinct types of retinal cells, but what we're seeing in this study is the potential of these cells to organize themselves into tissue of the retina."

The research may also have a more immediate impact on treatments for diseases in which the photoreceptor cells of the eye are damaged or destroyed. Sowden's group has found that transplanting photoreceptor precursors—a type of cell that appears early in development—could restore sight in these cases. But it has been difficult to obtain large numbers of these cells. Because the structures in the new study developed according to a predictable pattern and timeline, Sowden says, they could provide an ideal source of photoreceptor precursors for transplantation.

Sasai also believes the entire inner sheath of the optic cup-like structure—which contains photoreceptor cells arranged in the particular intricate layout that is necessary for sight—could be transplanted. His group has already had some success with this approach in mice, and they hope to have a human version of the system within two years. "The ability to replicate organogenesis in the lab could revolutionize stem cell medicine," says Robert Lanza, chief scientific officer at Advanced Cell Technology. Lanza was not involved in the study. "By understanding the necessary signals and forces, we can hopefully develop similar culture systems to encourage the self-directed organization of other complex tissue and organ systems needed for transplantation." 

By Jocelyn Rice
From Technology Review

Engineered Mice Make Better Choices

A study published online this week in Nature finds that mice engineered to produce more new neurons in the hippocampus—a structure involved in learning and memory—are better at discriminating between similar choices. The study adds new evidence for a link between the development of new neurons in the hippocampus and cognitive functions in the brain, and it also suggests how these neurons may affect mood disorders.

Nervy mice: Researchers at Columbia University engineered mice to produce more new hippocampal neurons in adulthood in order to understand the role of these neurons in cognition and mood. Here the hippocampus of an adult engineered mouse (right) contains more newly developed neurons (light green blotches) than that of a control mouse (left). 

Although scientists once believed that the adult brain could only lose neurons, research over the past several years has shown not only that new neurons form regularly—a process called adult neurogenesis—but also that they contribute to brain function and are diminished in certain diseases and disorders. Drugs that can boost adult neurogenesis are currently being investigated as treatments for depression, anxiety, and neurodegenerative disease. 

The exact role of new neurons, however, is still being debated. Previous studies found that blocking adult neurogenesis in the hippocampus prevented animals from discriminating between similar choices—a process called pattern separation. Amar Sahay, a postdoctoral fellow in the lab of neuroscientist René Hen at Columbia University, says that the current study sought to find out whether selectively boosting the production of new neurons in the adult hippocampus would have the opposite effect. The researchers engineered mice with a genetic switch that would turn off a gene that kills most new neurons in the adult hippocampus, thereby allowing more of these neurons to proliferate. The switch was turned on when the mice were injected with a specific drug, allowing the researchers to intervene only in adulthood.  

The engineered mice performed better at a task that required them to distinguish between a chamber in which they had previously received an electric shock and a similar one with slightly different features that they'd experienced as safe. Sahay explains that pattern separation "is a mnemonic process that we use on a day-to-day basis in navigating our environments" and that it is needed to form memories and make judgments.

Previous work in Hen's lab had shown that blocking adult neurogenesis in mice made them unresponsive to antidepressant medications. In this study, boosting new neurons did not immediately produce behaviors that suggested an antianxiety or antidepressant effect. But when the mice were allowed to run on an exercise wheel for four weeks—another known booster of new neurons—they displayed more exploratory behavior than control mice that also exercised, suggesting that environmental stimulation worked in concert with neurogenesis.

Fred Gage, a neuroscientist at the Salk Institute, says that the study "supports the emerging view that there seems to be a rather specific role for adult neurogenesis in pattern separation." The question of how neurogenesis affects mood is less clear. Hen and Sahay propose that pattern separation may play a role in mood disorders. Sahay says that this important cognitive ability declines with age and is impaired in conditions marked by anxiety, such as panic disorders and post-traumatic stress disorder. People with anxiety, he says, "may clump previously aversive memories with new experiences that are safe." Gage says this hypothesis, if proved true, may help explain why antidepressants help only certain patients. "It may be that there are cognitive components" of mood disorders like anxiety and depression, he says.

By Courtney Humphries
From Technology Review

Leaked pics show new Windows 8 interface

Microsoft's released a preview build of Windows 8 to some of its partners - and several of those partners have helpfully released it to the press, revealing a new ribbon interface. The ribbon is similar to that used previously in Microsoft Office, and replaces drop-down menus with a tabbed toolbar at the top of the page. Some people will love it for its clarity; others may object that it takes up an awful lot of space.

The inclusion of the ribbon may be a nod to the growing popularity of tablet PCs with their touch-based interface - Microsoft's already confirmed that Windows 8 will run on tablets.

And according to the Within Windows site, tablet users will be able to log on using a pattern, in much the same way as Android users can now. 

It adds that Microsoft is also planning to support extensible Welcome Screen-based audio controls, allowing users to control music playback while the machine is locked.

Obviously, the interface is still in the early stages of development, and in theory the ribbon might not make it through to the final build. But it's hardly an experiment for the company, and it's unlikely to have made an appearance in the preview version unless Microsoft was pretty definite about using it.

Also spotted is a new Welcome screen similar to the Metro design used in Zune and Windows Phone. It appears to show the time, day of the week and date, along with icons for power management and ease of access.

There's no date yet for the release of Windows 8, but the smart money's on early next year.

By Kate Taylor 

New Type of Drug Kills Antibiotic-Resistant Bacteria

Researchers at IBM are designing nanoparticles that kill bacteria by poking holes in them. The scientists hope that microbes will be less likely to develop resistance to this type of drug, which could make it useful in combating the emerging problem of antibiotic resistance. Drugs of this type have not had much success in clinical trials in the past, but initial tests of the nanoparticles in animals are promising.

 Nano killer: This drug-resistant staph bacterium has been split open and destroyed by an antimicrobial nanoparticle.

Drug-resistant bacteria have become a major problem. In 2005, nearly 95,000 people in the United States developed a life-threatening staph infection resistant to multiple antibiotics, according to the U.S. Centers for Disease Control and Prevention. It takes just one to two decades for microbes to develop resistance to traditional antibiotics that target a particular metabolic pathway inside the cell, says Mary B. Chan-Park, professor of chemical and biological engineering at Nanyang Technological University in Singapore, who was not involved with the research. In contrast, drugs that compromise microbes' cell membranes are believed to be less likely, or slower, to evoke resistance, she says.

"We're trying to generate polymers that interact with microbes in a very different way than traditional antibiotics," says James Hedrick, a materials scientist at IBM's Almaden Lab in San Jose, California. To do this, Hedrick's research group took advantage of past work on a library of polymer building blocks that can be mixed and matched to make complex nanoparticles. To make a nanoparticle that would selectively attack bacterial membranes and then break down harmlessly inside the body, the IBM group put together three types of building blocks. At the center of the polymer sequence is a backbone element that's water-soluble and tailored to interact with bacterial membranes. At either end of the backbone is a hydrophobic sequence. When a small amount of these polymer chains is added to water, the differences between the ends and the middle of the sequence drive the polymers to self-assemble into spherical nanoparticles whose shell is made up entirely of the part that will interact with bacterial cells. This work is described this week in the journal Nature Chemistry.

IBM's labs aren't equipped for biological tests, so the researchers collaborated with Yi Yan Yang at the Singapore Institute of Bioengineering and Nanotechnology to test the nanoparticles. They found that the nanoparticles could burst open and kill gram-positive bacteria, a large class of microbes that includes drug-resistant staph. The nanoparticles also killed fungi. Other types of deadly bacteria that have different types of cell membranes would not be vulnerable to these nanoparticles, but the IBM researchers say they are developing nanoparticles that can target these bacteria, too, though it is more difficult. "Through molecular tailoring," says Robert Allen, senior manager of materials chemistry at IBM Almaden, "we can do all sorts of things"—designing particles with a particular shape, charge, water solubility, or other property.

The IBM researchers believe the drug could be injected intravenously to treat people with life-threatening infections. Or it could be made into a gel that could be applied to wounds to treat or prevent infection.

However, Chan-Park cautions, other drugs that work by this membrane-piercing mechanism have not been very successful so far. Those that have shown early promise on the lab bench either were toxic to animal cells or simply didn't work in the complex environment of the human body.

More tests will be needed to say definitively whether the nanoparticles are safe and will work in people. Initial tests of the IBM particles with human blood cells and in live mice have been promising. Allen says the nanoparticles didn't interact with human blood cells because their electrical charge is significantly greater than that of bacterial cells. There were no signs of toxicity in mice injected with the particles, and none of them died. 

In addition to developing nanoparticles that can attack other types of bacteria, the IBM group is working on making larger quantities of the designer polymers, scaling up from the current two-gram capacity to the kilogram quantities needed for larger clinical tests. IBM won't be getting into the pharmaceutical business, says Allen, but the company plans to partner with a healthcare company to license the polymer drugs.

By Katherine Bourzac
From Technology Review