Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........


Particles can be quantum entangled through time as well as space

Quantum entanglement says that two particles can become intertwined so that they always share the same properties, even if they're separated in space. Now it seems particles can be entangled in time, too. Who's ready for some serious quantum weirdness?


Of all the ideas in modern physics, quantum entanglement is a serious contender for the absolute strangest. Basically, entangled particles share all their quantum properties, even if they are separated by massive distances in space. The really odd part is that any changes made to the properties of one particle will instantly occur in the other particle. There are some subtle reasons why this doesn't actually violate the speed of light, but here's the short version: this is all very, very bizarre.

But all experiments in quantum entanglement have focused exclusively on spatial entanglement, because seriously...isn't this already weird enough? Apparently not for physicists S. Jay Olson and Timothy C. Ralph of Australia's University of Queensland, who have figured out a series of thought experiments about how to entangle particles across time.

Now, what the hell does that mean? Well, Olson explains:

"Essentially, a detector in the past is able to ‘capture' some information on the state of the quantum field in the past, and carry it forward in time to the future — this is information that would ordinarily escape to a distant region of spacetime at the speed of light. When another detector then captures information on the state of the field in the future at the same spatial location, the two detectors can then be compared side-by-side to see if their state has become entangled in the usual sense that people are familiar with — and we find that indeed they should be entangled. This process thus takes a seemingly exotic, new concept (timelike entanglement in the field) and converts it into a familiar one (standard entanglement of two detectors at a given time in the future)."

That may still be a bit confusing, so think about it this way. The detectors are basically taking on the properties of their particles - if they share the same properties, then the particles themselves are entangled. The first, "past" detector stores one set of quantum properties, and then the second, "future" detector measures a new set of properties at the same location as the first. The two sets of quantum properties are affecting each other just like spatially entangled particles share the same properties, but now it's happening across time instead. Once the two detectors are brought together in time, the entanglement becomes the more normal (well, relatively speaking) sort of spatial entanglement.

This may seem difficult to comprehend - I know I'm struggling with it - but that's because we're accustomed to temporal events always being completely independent of one another. Both types of entanglement are counter-intuitive, to be sure, but it's easier for us to imagine particles sharing properties in different parts of space than it is different parts of time because we ourselves move through space so easily. And yet, from a physics perspective, there isn't all that much of a difference between space and time, and certainly not enough to rule out temporal entanglement.

Now, this is all still just hypothetical for the time being, but there is a theoretical basis for this and it may soon be possible to probe these ideas further with some experiments. Still, if you're up for a bit of extra credit weirdness, here's Olson and Ralph's thought experiment for teleportation through time. Let's say you want to move a quantum state, or qubit, through time. You'll need one detector coupled to a field in the "past" and another coupled to the same field in the "future." The first detector stores the information on the qubit and generates some data on how the qubit can be found again. The qubit is then teleported through time, effectively skipping the period in between the past and future detectors.

The first detector is removed and the second detector is put in precisely the same place, keeping the spatial symmetry intact. The second detector eventually receives the necessary information from the first, and then it uses this to bring the qubit back, reconstructing it in the future. There's a weird time symmetry to all this - let's say the qubit is teleported at 12:00 and the first detector gathers its information at 11:45. That fifteen-minute gap must exist in both directions, and it's impossible to reconstruct the qubit until 12:15 rolls around.

Obviously, these are all deeply strange, epically counter-intuitive ideas right at the bleeding edge of what modern physics can conceptualize. But it's also very awesome. And as soon as I even begin to understand it, I'm sure it'll get even more awesome.


German Researchers Build Terminator Robot Hand


Warning: Do not watch this video if you lay awake at night, kept from sleep by the terrifying knowledge that one day soon the human race will be thrown into slavery by The Machines. For the more naïve amongst us, here’s the clip:




Oh, I forgot to say that if you don’t like to see a finger being locked in a vice and then whacked with a metal bar, you probably shouldn’t watch, either. Sorry.

The robot hand you see is German-made, by researchers at the Institute of Robotics and Mechatronics. In building the first part of the Terminator, the researchers were going for robustness, and they appear to have achieved quite chilling success.

Not that utility has been traded for toughness: As the video shows, the hand is capable of an astonishing range of movement. The fingers are controlled by 38 tendons, each of which is driven by its own motor inside the forearm. Two tendons serve each joint. When their motors turn the same way, the joint moves. When they turn in opposite directions, the joint stiffens. This lets it toughen up to catch balls, yet be loose enough to perform delicate operations.
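
To make that antagonistic-tendon idea concrete, here is a minimal sketch of how two opposing tendon motors on one joint could be commanded. It is purely illustrative — not the institute's actual controller — and the units and gain are made up.

    # Illustrative sketch of antagonistic tendon control, as described above.
    # Two motors pull opposing tendons on a single joint:
    #   - turning them the same way moves the joint,
    #   - turning them against each other pretensions the tendons and stiffens it.

    def tendon_commands(joint_velocity, stiffness):
        """Return (motor_a, motor_b) speed commands for one joint.

        joint_velocity : desired joint motion (sign gives direction), arbitrary units
        stiffness      : co-contraction level, 0.0 (loose) to 1.0 (rigid)
        """
        common = joint_velocity        # shared component -> joint motion
        differential = stiffness       # opposing component -> tendon pretension
        return common + differential, common - differential

    # Loose and precise, e.g. for a delicate task:
    print(tendon_commands(joint_velocity=2.0, stiffness=0.1))
    # Stiff and braced, e.g. to catch a ball:
    print(tendon_commands(joint_velocity=0.0, stiffness=0.9))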

During tests, the researchers went all Joe Pesci on their robot creation, and took a baseball bat to the arm — a 66G whack. The result? Nothing. The hand came away unscathed.

Not only can the hand take punishment, it can also deal it out, exerting up to 30 newtons of force with its fingers, plenty for either a stimulating massage or a deadly choking grip. It is also fast. The joints can spin at 500 degrees per second. If it tenses the springs joined to the tendons first, and then releases that energy, the joints can reach a head-spinning 2,000 degrees per second, or 333 rpm. That’s fast enough for it to snap its fingers and summon a human slave to do its bidding.

It doesn’t stop there. The head of the hand team, Markus Grebenstein (don’t you just wish it was Grabenstein?), says that the plan is to build a torso with two arms. His excuse? As he told IEEE Spectrum, “The problem is, you can’t learn without experimenting.”
Yes you can, Mr. Grebenstein. Just watch Terminator 2.


By Charlie Sorrel 
From wired.com

Fat Cells for Broken Hearts

Too much fat around the waist may be bad for your health, but the stem cells it contains might one day save your life. Starting this month, a new European trial aims to determine whether stem cells harvested from a person's own fat, delivered shortly after a heart attack, could prevent some of the cardiac muscle damage that results from blocked arteries.

 A cellular solution: Cytori’s “Celution” machine (top) processes a sample of fat—as small as a golf ball or as large as a soda can—to isolate its stem cells and other potentially regenerative cells in less than an hour. After processing the fat, a scientist withdraws a pink slurry into a syringe (bottom). The fluid, which will later be injected into the heart, contains as many as a million stem cells per teaspoon.

During a heart attack, blood vessels that deliver blood to the heart muscle are blocked, and the lack of oxygen slowly kills the tissue. San Diego-based Cytori Therapeutics has developed a treatment that aims to prevent much of that muscle damage before it starts. It works by injecting a concentrated slurry of stem cells and other regenerative cells isolated from the patient's body directly into the heart's main artery within 24 hours after an attack. "Time is muscle. The quicker you get in, the better," says Christopher Calhoun, Cytori's chief executive officer. "You can't do anything about dead tissue, but tissue that's bruised and damaged—that's revitalizable. If you can get new blood flow in there, that tissue comes back to life."

Adult stem cells, which exist in small populations throughout the body, can differentiate to form specific tissue types and are responsible for repairing injuries and replacing dying cells. The prospect of using them to heal damaged heart muscle has tantalized biomedical researchers for more than two decades. If the stem cells come from a patient's own body, there is no risk of rejection. A number of clinical trials in recent years have focused on using stem cells collected from bone marrow, since this potent population can differentiate into both cardiac muscle cells and blood vessel cells, among other types. But marrow stem cells are difficult to collect and somewhat scarce; they must be isolated and then grown in culture before they're injected back into an injured heart. The process can take weeks. 

Over the last decade, however, researchers have discovered that fat tissue has its own population of stem cells that are more easily accessible and far more abundant than the ones in bone marrow. A typical sample of bone marrow yields about 5,000 stem cells; a sample of fat, gathered quickly through liposuction, can provide up to 200 times that amount. Fat also contains other cells that may aid the healing process. "My belief is that you need that mixed population of cells in order for this therapy to be effective," says Stuart Williams, director of the Cardiovascular Innovation Institute in Louisville, Kentucky, who is not affiliated with Cytori. "Everybody who is now seeing success with the use of these cells—almost all of them—are using a mixed population." 
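
As a quick cross-check of the yield figures quoted above (my arithmetic, using only the numbers already in the article):

    # Cross-check of the stem-cell yield figures quoted above.
    marrow_yield = 5_000          # stem cells from a typical bone-marrow sample
    fat_multiplier = 200          # a fat sample can provide up to ~200x that amount
    print(f"fat sample: up to ~{marrow_yield * fat_multiplier:,} stem cells")
    # Consistent with the "as many as a million stem cells per teaspoon"
    # figure in the caption above.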

Fat breakdown: The cells Cytori is interested in live in the spaces between the fat cells. The company’s machine uses enzymes to decompose the fat and free up these cells for harvesting.

Cytori has created a portable machine that, in less than an hour, can reduce a sample of fat "about the size of a can of Coke," Calhoun says, to less than a teaspoon of concentrated slurry that Cytori believes contains its most vital elements: stem cells, smooth muscle cells, cells that line blood vessels, and a number of other regenerative cells that can promote growth and healing. 

"Fat tissue has been used for years by very astute surgeons, who just pulled pieces of fat into areas that weren't healing well," Williams says. "All the data so far has shown that these cells are safe, but beyond that, what are these cells doing? We just don't know."

One benefit of Cytori's therapy is that unlike stem cells derived from bone marrow, it can be used immediately after a heart attack, before much tissue damage has occurred. The regenerative cells from fat appear to release growth factors and other chemicals that prevent cell death, Calhoun says, "by sending out signals that say, 'Don't commit suicide—the cavalry's coming.'" He believes that it's the mixture of different cells, rather than just the stem cells, that is important for repair: "I think it's a group of cells that are reacting to the local environment, signaling back and forth, that help stimulate new blood vessels."

Cytori's system is already used in Europe, Japan, and elsewhere for cosmetic enhancement, wound healing, and breast reconstruction surgery. But for cardiac use, the company had to retool its "Celution" machine to create a solution safe enough to be injected directly into the coronary artery without clotting. The procedure is relatively straightforward. When a patient comes to the emergency room after a heart attack, a physician removes a fat sample and runs it through Cytori's machine, where enzymes break down the fat and free up the desired cells in the surrounding matrix. "We've taken a seven-to-eight-hour lab process, validated it, optimized it, and put it in a box. Then each patient gets his own sterile tissue right back," Calhoun says. An hour after the fat is harvested, surgeons place a stent into the patient's coronary artery and deliver the slurry of cells. 

The procedure has been tested in a 12-person pilot trial in Europe, and the results seem encouraging: after six months, patients who were given a solution of their own fat-derived stem cells within 12 hours of a heart attack had about half as much dead tissue as those who hadn't had the treatment. That translated into fewer irregular heartbeats, better oxygen delivery, and improved blood flow. (Another trial, for chronic heart failure, also shows potential, and the procedure has been approved in Europe.) 

Cytori began a large-scale trial this month and hopes to test the procedure on 360 patients. The company aims to start large-scale clinical trials on heart attack patients in the United States by 2014 and on patients with chronic heart failure even earlier than that.

"This cell type holds a lot of potential, and I think it could become a very important treatment for heart attack and [restricted blood flow]," says Keith March, director of the Indiana Center for Vascular Biology and Medicine at Indiana University in Indianapolis. "So the fact that they're starting this trial and have shown good evidence of feasibility and safety is very encouraging—we're anxious to see what it shows. That these cells can be obtained so readily and the procedure done in a point-of-care fashion means that this technology could be affordable and accessible even in areas of the world where highly technical surgery may not be possible."
But it's still early. Clinical trials of stem cells derived from bone marrow have shown mixed results in treating heart disease, and it's unclear whether fat-derived cells will fare better. "If it works, it would be wonderful to have a ready-made source of autologous stem cells," says Richard Schatz, research director of cardiovascular interventions at Scripps Health in San Diego, one of the inventors of the coronary stent. But he and others note that it will take many more trials to determine how effective Cytori's methods are compared with treatments based on marrow stem cells. 

By Lauren Gravitz
From Technology Review

An Even Smaller Pocket Projector

Researchers in Germany have developed the world's thinnest "pico" video projector. The prototype device contains an array of carefully shaped microlenses, each with its own miniature liquid-crystal display (LCD). The device is just six millimeters thick, but it produces images that are 10 times brighter than would normally be possible with such a small device.

 Light fantastic: This prototype pico projector produces a sharp image without needing a bulky lens, thanks to an ultrathin array of microlenses.

Handheld pico projectors can be used to display movies, maps, or presentations on nearby surfaces. But the projections can be difficult to view in direct sunlight because the light source isn't very powerful. The new lens system is small enough to be incorporated into a slim smart phone.

Increasing the brightness of a projection normally means increasing the area of the light source used, says Marcel Sieler, a researcher at the Fraunhofer Institute for Applied Optics and Precision Engineering in Germany who helped develop the prototype. But increasing the area in this way requires a thicker lens to focus the larger image. "As the area of the light source increases, so does the volume of the lens," says Sieler. The result is a much bigger projector.

Sieler and colleagues created a novel type of lens that focuses light from a relatively large light source while remaining thin. The prototype video projector consists of 45 microlenses colored red, green, or blue. Each lens has an LCD with 200 by 200 pixels behind it. The light passing through each LCD is focused by its lens, and the 45 resulting images are superimposed on one another to produce the final image. The design was inspired by a type of microlens array known as a "fly's eye condenser," which is normally used to mix light from different sources.

The resolution of the projector is close to that of a WVGA projector, which has 800 by 480 pixels. The new projector has a brightness of 11 lumens, says Sieler, compared to 10 to 15 lumens for existing pico projectors. Sieler says that if the prototype were the same size as an existing pico projector, it would produce about 90 lumens. The next challenge is to make the LCD pixels smaller, from 8.5 microns each to less than three microns, says Sieler.
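
As a rough back-of-the-envelope check of those numbers (my own arithmetic, assuming output simply scales with the number of microlens channels, i.e. with array area):

    # Rough scaling estimate for the microlens-array projector described above.
    channels = 45            # microlenses in the 6 mm prototype
    brightness_lm = 11.0     # measured output, lumens

    per_channel = brightness_lm / channels
    print(f"~{per_channel:.2f} lm per channel")

    # Channels needed to reach the ~90 lm Sieler projects for an array
    # the size of a conventional pico projector:
    target_lm = 90.0
    print(f"~{target_lm / per_channel:.0f} channels for ~{target_lm:.0f} lm")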

Microlens arrays are not new, notes Tim Holt, executive director of the Institute of Photonics at the University of Strathclyde. But Holt says that molding each lens in a way that focuses light at a single convergent point is novel.

The LCD behind each lens is sandwiched inside a transparent material, allowing the entire lens to be more compact. To make this kind of integration possible, the researchers needed a new transparent material. Glass wasn't suitable because its melting point is so high that the LCD components would be destroyed during the molding process, says Michael Popall, head of the Fraunhofer Institute for Silicate Research, who helped develop the new pico projector. Transparent polymers tend to have the opposite problem: their melting point is so low they would melt and deform under the projector's light source.

Popall and colleagues developed what he calls a hybrid organic-inorganic material. "This is an optical material that is totally transparent in the visual range, and which you can process like a polymer," he says. 

In February, the new prototype will be demonstrated at the Nano Tech 2011 conference in Tokyo.

By Duncan Graham-Rowe
From Technology Review

Automakers Show Interest in an Unusual Engine Design

An engine development company called the Scuderi Group recently announced progress in its effort to build an engine that can reduce fuel consumption by 25 to 36 percent compared to a conventional design. Such an improvement would be roughly equal to a 50 percent increase in fuel economy.

 $50 million engine: It took Scuderi Group most of the $65 million it’s raised so far to develop just one engine, the prototype shown here. It’s a split-cycle two-cylinder engine, in which one cylinder compresses air and the other combusts a fuel-air mixture.

Sal Scuderi, president of the Scuderi Group, which has raised $65 million since it was founded in 2002, says that nine major automotive companies have signed nondisclosure agreements that allow them access to detailed data about the engine. Scuderi says he is hopeful that at least one of the automakers will sign a licensing deal before the year is over. Historically, major automakers have been reluctant to license engine technology because they prefer to develop the engines themselves as the core technology of their products. But as pressure mounts to meet new fuel-economy regulations, automakers have become more interested in looking at outside technology.

Although Scuderi has built a prototype engine to demonstrate the basic design, the fuel savings figures are based not on the performance of the prototype but on computer simulations that compare the Scuderi engine to the conventional engine in a 2004 Chevrolet Cavalier, a vehicle for which extensive simulation data is publicly available, Scuderi says. Since 2004, automakers have introduced significant improvements to engines, but these generally improve fuel economy by something like 20 percent, compared with the roughly 50 percent improvement the Scuderi simulations show. 

There's a big difference, however, between simulation results and data from engines in actual vehicles, says Larry Rinek, a senior consultant with Frost and Sullivan, an analyst firm. "So far things are looking encouraging—but will they really meet the lofty claims?" he says. Automakers should wait to see data from an actual engine installed in a vehicle before they license the technology, he says. 

A conventional engine uses a four-stroke cycle: air is pulled into the chamber, the air is compressed, fuel is added and a spark ignites the mixture, and finally the combustion gases are forced out of the cylinder. In the Scuderi engine, known as a split-cycle engine, these functions are divided between two adjacent cylinders. One cylinder draws in air and compresses it. The compressed air moves through a tube into a second cylinder, where fuel is added and combustion occurs. 

Splitting these functions gives engineers flexibility in how they design and control the engine. In the case of the Scuderi engine, there are two main changes from what happens in a conventional internal-combustion engine. The first is a change to when combustion occurs as the piston moves up and down in the cylinder. The second is the addition of a compressed-air storage tank.

In most gasoline engines, combustion occurs as the piston approaches the top of the cylinder. In the Scuderi engine, it occurs after the piston starts moving down again. The advantage is that the position of the piston gives it better leverage on the crankshaft, which allows the car to accelerate more efficiently at low engine speeds, saving fuel. The challenge is that, as the piston moves down, the volume inside the combustion chamber rapidly increases and the pressure drops, making it difficult to build up enough pressure from combustion to drive the piston and move the car. 

The split-cycle design, however, allows for extremely fast combustion—three to four times faster than in conventional engines, Scuderi says—which increases pressure far faster than the volume expansion decreases it. He says that fast combustion is enabled by creating very high pressure air in the compression cylinder, and then releasing it into the combustion chamber at high velocities.

Having a separate air-compression cylinder makes it easy to divert compressed air into a storage tank, which can have a number of advantages.  For one thing, it's a way to address one problem with gasoline engines: they're particularly inefficient at low loads, such as when a car is cruising at moderate speeds along a level road. Under such conditions, the air intake in a conventional engine is partly closed to limit the amount of air that comes into the engine—"it's like sucking air in through a straw," Scuderi says, which makes the engine work harder. 

In the new engine design, rather than shutting down air flow, the air intake is kept wide open, "taking big gulps of air," he says.  The air that's not needed for combustion is stored in the air tank. Once the tank is full, the compression piston stops compressing air. It's allowed to move up and down freely, without any significant load being put on the engine, which saves fuel. The air tank then feeds compressed air into the combustion chamber. 

The air tank also provides a way to capture some of the energy from slowing down the car. As the car slows, the wheels drive the compression cylinder, filling up the air tank. The compressed air is then used for combustion as needed.
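
The air-management logic described in the last few paragraphs can be summarized in a short sketch. This is my paraphrase of the article's description, not Scuderi's actual control strategy; the pressure values are placeholders.

    # Illustrative sketch of the split-cycle engine's air management, as described above.

    def manage_air(tank_pressure, tank_full_pressure, braking):
        """Decide what the compression cylinder and air tank do on this cycle."""
        if braking:
            # Deceleration: the wheels drive the compression cylinder to refill the tank.
            return "wheels drive compression cylinder -> store air in tank"
        if tank_pressure < tank_full_pressure:
            # Intake stays wide open; surplus compressed air is diverted to the tank.
            return "compress with intake wide open -> feed combustion, store surplus in tank"
        # Tank full: the compression piston freewheels, and stored air feeds combustion.
        return "compression piston freewheels; combustion fed from tank"

    for tank, full, braking in [(5.0, 10.0, False), (10.0, 10.0, False), (6.0, 10.0, True)]:
        print((tank, full, braking), "->", manage_air(tank, full, braking))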

It is still far from clear whether the design can be a commercial success. Even if the simulation results translate into actual engine performance in a car, the engine may not prove to be easy and affordable to manufacture, Rinek says, especially with equipment in existing factories. The design will also have to compete with many other up-and-coming engine designs. Scuderi says the first application of the engine might not be in cars, but instead as a power generator, especially in applications where having compressed air on hand can be useful. For example, construction sites can require electricity for power saws and compressed air for nail guns. 

By Kevin Bullis
From Technology Review

Treating Genetic Disorders Before Birth

Physicians may one day be able to treat genetic blood diseases before a child is even born. In a study of mice that was published this week in the Journal of Clinical Investigation, researchers at the University of California, San Francisco, have found that transplanting a mother's own stem cells into her fetus populates its bone marrow with healthy cells while avoiding immune rejection. 

 Fortifying the fetus: By studying the immune systems of 14-day-old mouse embryos (shown here in an ultrasound), researchers have shown that genetic blood disorders might be treatable before birth.

If the findings hold true in humans, stem-cell transplants from mother to fetus could prime the fetus for a bone-marrow transplant from its mother—or a donor that is tissue-matched to the mother—after birth.
Diseases such as sickle cell anemia and beta thalassemia result from abnormal red blood cells and can be treated with bone-marrow transplants. But it's not always possible to find a match.   And standard bone-marrow transplants, even between tissue-matched donors, must be followed with a lengthy course of immunosuppressive drugs. 

Scientists theorize that bone-marrow transplants performed when a fetus is still developing would override this problem. They suspect that the fetus's immature immune system could be tricked into adopting those foreign cells and recognizing them as its own. "The fetus is wired to tolerate cells—when it encounters cells from mom, it tolerates them," says Tippi MacKenzie, the pediatric surgeon at UCSF who led the new research. 

Research in animals has shown the promise of that approach. But early tests in humans came up against a serious setback—the donor cells were being rejected and killed off before a fetus could assimilate them, and no one was quite sure why. "It's a conundrum," says MacKenzie. 

The blame, it seems, may be mom's. MacKenzie and her colleagues found that when they injected a fetus with hematopoietic stem cells (which populate bone marrow and give rise to blood cells) that were not matched to the mother or fetus, the infusion prompted an influx of maternal immune cells into the fetus.

"What we saw was that it's not the fetal immune system that's rejecting the cells—it's the mother's," MacKenzie says. "And if it's the mother's immune system that's rejecting them, we may be able to transplant maternal cells for some of these disorders and get them to engraft." Indeed, when researchers injected the fetus with stem cells from a donor that was tissue-matched to the mother, the cells happily took root in the fetus's bone marrow. 

"The critical question is how this applies to large animals and humans," says Alan Flake, a pediatric surgeon and the director of the Center for Fetal Research at the Children's Hospital of Philadelphia. Part of the issue with humans is that even when adult stem cells do survive in the fetus, it's difficult for them to compete with the resident fetal stem cells, which can proliferate much more rapidly. 

But Flake, who pioneered the fetal-stem-cell transplant treatment for severe combined immunodeficiency disease (SCID), or "bubble boy syndrome," says that although a single dose of maternal or maternally matched donor cells might not cure disease, it could prime the fetus's immune system into accepting a stem-cell transplant from the same person later in life.  (Fetal transplants work in SCID because the disease causes such a severe lack of immunity that a fetus can fully incorporate stem-cell transplants without competition from preëxisting immune cells.)

"This might allow us to perform in-utero transplantation to fetuses in a way that is more likely to work, and to do it in a way that could be safer to the fetus," says Joseph (Mike) McCune, a professor of experimental medicine at UCSF who was not involved in the study.  

The next step, MacKenzie says, will be to test the treatment in larger mammals and nonhuman primates. But for now, her lab is more focused on understanding precisely what's going on in the maternal-fetal immune system interactions. "We're trying to figure out the mechanism by which the mother cells are exerting their effect. And we're looking at the idea of immune-cell trafficking between mom and fetus—to what extent does it happen in human pregnancies?" 

By Lauren Gravitz
From Technology Review

The Amazing Carbon Fiber Loom Toyota Didn't Want You To See

Fifteen months ago, we tried to show you a video of the amazing carbon fiber loom Toyota invented to weave parts for the Lexus LFA. Lexus yanked that video, concerned competitors would steal the technology. Now it's back online.


The video below, part of a passel of commercials Lexus loosed on the internets earlier this month, highlights the dual-tube loom in action, which is used for the A-pillars on the LFA and which Toyota says will be used to build parts for future models. Given the Japanese automaker was originally founded as a loom builder in 1926, it may have a historical advantage if the auto industry's future includes more weaving or knitting.






New Study Finds No Sign of ‘First Habitable Exoplanet’

Things don’t look good for Gliese 581g, the first planet found orbiting in the habitable zone of another star. The first official challenge to the small, hospitable world looks in the exact same data — and finds no significant sign of the planet.



“For the time being, the world does not have data that’s good enough to claim the planet,” said astro-statistics expert Philip Gregory of the University of British Columbia, author of the new study.

The “first habitable exoplanet” already has a checkered history. When it was announced last September, Gliese 581g was heralded as the first known planet that could harbor alien life. The planet orbits its dim parent star once every 36.6 days, placing it smack in the middle of the star’s habitable zone, the not-too-hot, not-too-cold region where liquid water could be stable.

Planet G was the sixth planet found circling Gliese 581, a red dwarf star 20 light-years from Earth. A team of astronomers from the Geneva Observatory in Switzerland found the first four planets using the HARPS spectrograph on a telescope in Chile. The team carefully measured the star’s subtle wobbles as the planets tugged it back and forth.

Two more planets, including the supposedly habitable 581g, appeared when astronomers Steve Vogt of the University of California, Santa Cruz and Paul Butler of the Carnegie Institution of Washington added data from the HIRES spectrograph on the Keck Telescope in Hawaii. They announced their discovery Sept. 29.
Just two weeks later, the HARPS team announced they found no trace of the planet in their data, even when they added two more years’ worth of observations. But it was still possible that the planet was only visible using both sets of data.

Now, the first re-analysis of the combined data from both telescopes is out, and the planet is still missing.
“I don’t find anything,” Gregory said. “My analysis does not want to lock on to anything around 36 days. I find there’s just no feature there.”

Unlike earlier studies, Gregory used a branch of statistics called Bayesian analysis. Classical methods are narrow, testing only a single hypothesis, but Bayesian methods can evaluate a whole set of scenarios and figure out which is the most likely.

Gregory wrote a program that analyzed the likelihood that a given planetary configuration would produce the observed astronomical data, then ran it for various possible configurations.
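
Gregory's actual analysis is far more sophisticated, but the general idea of Bayesian model comparison on radial-velocity data can be sketched in a few lines. Everything below — the period grid, amplitudes, noise level, and synthetic "observations" — is made up for illustration.

    # Toy Bayesian comparison of candidate planet periods on synthetic radial-velocity data.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 400, 120))             # observation times (days)
    true_v = 3.0 * np.sin(2 * np.pi * t / 67.0)       # one "planet" with a 67-day period
    sigma = 1.5
    v = true_v + rng.normal(0, sigma, t.size)         # measured velocities + noise

    def log_likelihood(model_v):
        return -0.5 * np.sum(((v - model_v) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

    def log_evidence(period):
        """Crudely marginalize over amplitude and phase on a grid (uniform priors)."""
        amps = np.linspace(0.0, 6.0, 40)
        phases = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
        logls = [log_likelihood(a * np.sin(2 * np.pi * t / period + p))
                 for a in amps for p in phases]
        return np.logaddexp.reduce(logls) - np.log(len(logls))

    # Analogous to asking: does a 36.6-day planet explain the data better than none?
    for period in [36.6, 67.0]:
        print(f"period {period:5.1f} d : log evidence = {log_evidence(period):8.1f}")
    # The configuration with the higher evidence is preferred; a claimed planet
    # whose period adds nothing to the evidence is not supported by the data.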

For the HARPS data set, he found that the best solution was a star with five planets, which orbit the star once every three, five, 13, 67 and 400 days. The 36-day habitable world wasn’t there.

When he looked at the HIRES and the combined data sets, the best solution was a star with two planets. Only when he included an extra term in the HIRES data did Gregory find more, which he suspects means the HIRES instrument isn’t as accurate as thought.

“There may be something in the telescope…that’s contributing to the error,” he said.
Gregory’s model finds the probability that the six-planet model is a false alarm is 99.9978 percent. None of the planets Gregory’s analysis turned up are in the habitable zone. The results are in a paper submitted to the Monthly Notices of the Royal Astronomical Society and published on the physics preprint website arxiv.org.

Other astronomers seem impressed with Gregory’s analysis.
“That’s the right way to do it,” said exoplanet expert Daniel Fabrycky of the University of California, Santa Cruz. “I think everyone would agree that that is the most sophisticated analysis that you can do, and as much as you could hope to do.”

“The Gregory paper is by far the most complete statistical analysis to date that has been made public,” said exoplanet and astro-statistics expert Eric Ford of the University of Florida. “It’s by far the most rigorous analysis.”

But most astronomers are not yet ready to close the book on Gliese 581g.
“I’m not going to admit that it’s a dead planet yet,” said exoplanet expert Sara Seager of MIT. “No one will be able to sort this out today … it will take some time.”

Vogt still firmly believes the planet is there. “I’m standing by our data,” he told Wired.com.
He said there are two ways to interpret the signals from Gliese 581. Sometimes a single planet with an elongated, or elliptical orbit can look the same as two planets that trace perfect circles around their stars. One of Gliese 581’s planets, planet D, could be one of these “eccentric impostors,” hiding an extra planet within its signal.

Part of the reason it’s so difficult to tell these two scenarios apart is that spotty observations make fake signals in the data. These signals, which show up because the telescope can’t watch the star continuously, look like they could actually be planets, but they would disappear if we could observe round the clock.
In a paper that’s still in preparation, astronomer Guillem Anglada-Escudé and Harvard graduate student Rebekah Dawson tackle these issues, and conclude that the habitable planet still has a chance. “With the data we have, the most likely explanation is that this planet is still there,” Anglada-Escudé said.

Everyone agrees that the problem can only be resolved with more data. In particular, astronomers are anxious to see the extra data that the HARPS group used to conclude Gliese 581g is a mirage.

“I don’t think anything will change significantly until the Swiss publish their data,” Anglada-Escudé said. “Nobody else has seen their data. We’re waiting to see that, just to settle down the problem.”

By Lisa Grossman 
From wired.com

New Magnets Could Solve Our Rare-Earth Problems

Stronger, lighter magnets could enter the market in the next few years, making more efficient car engines and wind turbines possible. Researchers need the new materials because today's best magnets use rare-earth metals, whose supply is becoming unreliable even as demand grows. 

 Nano attraction: These flake-shaped nanoparticles are made up of the magnet material neodymium-iron-boron.

So researchers are now working on new types of nanostructured magnets that would use smaller amounts of rare-earth metals than standard magnets. Many hurdles remain, but GE Global Research hopes to demonstrate new magnet materials within the next two years.

The strongest magnets rely on an alloy of the rare-earth metal neodymium that also includes iron and boron. Magnet makers sometimes add other rare-earth metals, including dysprosium and terbium, to these magnets to improve their properties. Supplies of all three of these rare earths are at risk because of increasing demand and the possibility that China, which produces most of them, will restrict exports.

However, it's not clear if the new magnets will get to market before the demand for rare-earth metals exceeds the supply. The U.S. Department of Energy projects that worldwide production of neodymium oxide, a key ingredient in magnets, will total 30,657 tons in 2015. In one of the DOE's projected scenarios, demand for that metal will be a bit higher than that number in 2015. The DOE's scenarios involve some guesswork, but the most conservative estimate has demand for neodymium exceeding supply by about 2020.

"A lot of the story about rare earths has focused around China and mining," says Steven Declos, manager of material sustainability at GE Global Research. "We believe technology can play a role in addressing this." The DOE is funding GE's magnet project, and one led by researchers at the University of Delaware, through the Advanced Research Projects Agency-Energy (ARPA-E) program, which fosters research into disruptive technology.

Coming up with new magnet materials is not easy, says George Hadjipanayis, chair of the physics and astronomy department at the University of Delaware. Hadjipanayis was involved in the development of neodymium magnets in the 1980s while working at Kollmorgen.  "At that time, maybe we all got lucky," he says of the initial development of neodymium magnets. The way researchers made new magnets in the past was to crystallize alloys and look for new forms with better properties. This approach won't work going forward. "Neodymium magnet performance has plateaued," says Frank Johnson, who heads GE's magnet research program. Hadjipanayis agrees. "The hope now is nanocomposites," he says. 

Nanocomposite magnet materials are made up of nanoparticles of the metals that are found in today's magnetic alloys. These composites have, for example, neodymium-based nanoparticles mixed with iron-based nanoparticles. These nanostructured regions in the magnet interact in a way that leads to greater magnetic properties than those found in conventional magnetic alloys. 

The advantage of nanocomposites for magnets is twofold: nanocomposites promise to be stronger than other magnets of similar weight, and they should use less rare-earth metal. What enables better magnetic properties in these nanocomposites is a property called exchange coupling. The physics are complex, but coupling between different nanoparticles in the composite leads to overall magnetic properties that are greater than the sum of the parts. 

Exchange coupling can't happen in pure magnet materials, but emerges in composites made of mixtures of nanoparticles of the same metals that are used to make conventional magnets. "The advantage of stronger magnets is that the machines you put them in can be smaller and lighter," says Johnson.

GE would not disclose which materials it's using to make the magnets, or what its manufacturing methods would be, but Johnson says the company will rely on techniques it has developed to work with other metals. The main problem the company faces, says Johnson, is scaling up production to make large magnets—so far it's only been possible to make thin films of the nanocomposites. The company has about $2.25 million in funding from ARPA-E.

Hadjipanayis reports his group, a multi-institute consortium, has  received nearly $4.5 million in ARPA-E funding. It's possible to make the necessary nanoparticles in small quantities in the lab, but scaling up will be difficult. "They're very reactive materials," he says. 

The group is experimenting with a wide range of different types of nanoparticles, including combinations of neodymium-based nanoparticles with iron-cobalt nanoparticles. Another challenge is assembling the nanoparticles in a mixture that ensures they have enough contact with each other to get exchange coupling. "It's one step at a time," says Hadjipanayis.

By Katherine Bourzac
From Technology Review

The Healing Power of Light

A new polymer material that can repeatedly heal itself at room temperature when exposed to ultraviolet light presents the tantalizing possibility of products that can repair themselves when damaged. Possibilities include self-healing medical implants, cars, or even airplane parts.

 No glue required: Broken polymer chains reform to repair a crack in this material when it is pressed together and exposed to UV light.

The polymer, created by researchers at Carnegie Mellon University and Kyushu University, heals when a crack in the material is pressed together and exposed to UV light. The same treatment can cause separate chunks of the material to fuse together to form one solid piece.

The researchers were able to cut the same block into pieces and put them back together at least five times. With further refinement, the material could mend itself many more times, says CMU chemistry professor Krzysztof Matyjaszewski, who led the research team.

Currently, the polymer can only repair itself in an oxygen-free environment, so the researchers had to carry out the UV treatment in the presence of pure nitrogen. But they hope to develop polymers that heal under visible light and don't require nitrogen, which should open up many practical applications, including products and components that heal after suffering minor damage. Such a material, Matyjaszewski says, "would be a dramatic improvement over what we've already done."

Self-healing materials have been made before, mainly polymers and composites. But most of those have relied on tiny capsules that are filled with a healing agent. When the polymer cracks, the capsules break open and release the healing agent, which solidifies, sealing the crack and restoring the material's properties. But once the capsules are depleted, the material can no longer mend itself. 

The new polymer relies on carbon-sulfur bonds within the material. "There are thousands of chemical bonds here, and even if you lose a small percent, one can think about potentially repeating the healing a hundred times," Matyjaszewski says.

The researchers found that even shredded bits of the polymer will join together to form a continuous piece when irradiated with UV light. This implies that the material could also be easy to recycle. The researchers presented the details of their experiments in a paper published online in Angewandte Chemie.

Some research groups, including Matyjaszewski's, have made polymers that heal when exposed to heat or certain chemicals. But Michael Kessler, a materials science and engineering professor at Iowa State University, says light healing is a superior option. "I think that UV stimulus is particularly appealing as an external stimulus because it's noncontact, it happens at room temperature, it's pretty easy to acquire and handle, and, importantly, it's limited to target areas where the damage occurs," Kessler says.

Kessler adds, however, that the new material suffers from two of the main drawbacks faced by other self-healing materials: it requires pressure, and the repair process takes hours. 

Nonetheless, some self-healing materials are on their way to commercialization. Autonomic Materials in Champaign, Illinois, is readying corrosion- and scratch-resistant coatings containing microcapsules developed by Scott White, a professor at the Beckman Institute at the University of Illinois at Urbana-Champaign. 

White's colleague Nancy Sottos  has made materials that mimic human skin and that heal themselves using underlying channels filled with healing agents. Sottos envisions the materials being used for structural applications such as airplane parts, car and spacecraft components, and for everyday products such as cell-phone and laptop cases. 

UV-triggered healing won't be suitable for all applications, says Sottos. That's because the restructuring of carbon-sulfur bonds that allows the material to heal also requires that material to be rubbery and soft. 

"You can make materials that are harder or softer," says Matyjaszewski. "Every self-healing material is somehow unique and has advantages over the other ones. It depends on the properties and area of application." 

By Prachi Patel
From Technology Review

Steering Pills Safely

Controlling a pill as it moves through a patient's body could let doctors deliver drugs to precisely the right location—for instance, a tumor in the colon. 

 Swallow this: A magnet within the capsule shown here lets researchers control the capsule’s position within the digestive tract. This could lead to more precise drug-delivery methods.

Researchers at Brown University have demonstrated a way to safely control a maneuverable pill and watch as it passes through the gastrointestinal tract. Aside from looking for traces in a patient's bloodstream, it is hard for doctors to tell if a drug has been delivered properly. "You can take X-rays [of tagged pills], but you can never really know what time the drug was released," says Edith Mathiowitz, an associate professor of medical science and engineering at Brown University and principal investigator on the project. Mathiowitz's team, which focuses on creating better drug-delivery technologies, originally built a magnetic tracking system to observe pills as they pass through the body. But the researchers soon realized they could control the pills as well.

"What we developed could be extremely useful for improving bioavailability for drugs that have even narrow therapeutic windows," says Mathiowitz, referring to drugs that are absorbed only in specific regions of the gastrointestinal tract. "You can use it in two ways: a retention system for the stomach [to ensure the patient receives the desired dosage] and for localization in specific regions of the gastrointestinal tract—regions that are very hard to reach."

In experiments, Mathiowitz's team moved pills through the stomachs and intestines of rats. They developed a system to measure and control the force between a one-millimeter-long magnet within the pill and a large external magnet used to control its movement. The system automatically moves the external magnet closer to or farther from the pill's magnet to maintain the minimum amount of force that will manipulate the pill, avoiding harm to the animal's intestine or stomach. Other research groups have shown that capsules can be manipulated inside the body magnetically, but they have not focused on minimizing the force used, explains Bryan Laulicht, first author of the research paper describing the work, published today in the online edition of the journal Proceedings of the National Academy of Sciences. "The prevailing thought was to use as much force as possible," he says. "Our real push was to emphasize the safety."

Pill tracking: A new system carefully controls the position of a magnetic pill (circled in white) inside a rat’s digestive tract.


The pill developed at Brown is about the same size and shape as a Tylenol capsule and contains a magnet and a reservoir to hold drugs and microscopic iron particles, which show up on the X-ray.

In the animal experiments, the controlling magnet was placed next to the rat, along with a device to measure the force between the two magnets—a small cantilever that bends in response to force and is sensitive enough to detect just .01 gram of mass. The changes in the cantilever beam were fed into a computer 10 times a second, and the controlling magnet moved automatically in response. The team inferred the minimum force needed from calculations of the normal pressures that occur during digestion and was able to keep the magnetic pill in the rat's small intestines for 12 hours. The team used an X-ray machine to track the pill in the rat.
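
The minimum-force idea amounts to a simple feedback loop: read the cantilever force ten times a second and nudge the external magnet closer or farther so the force stays near a small target. Here is an illustrative sketch — not the Brown group's actual control code — with a made-up force model, gain, and units.

    # Minimal sketch of the 10-Hz minimum-force feedback described above.

    def external_magnet_step(measured_force, target_force, separation, gain=0.5):
        """One control update: too much force -> back the magnet away; too little -> move closer."""
        return separation + gain * (measured_force - target_force)

    def force_at(separation_cm):
        # Toy model: attraction falls off steeply with separation.
        return 8.0 / separation_cm ** 4

    separation = 2.0          # external magnet distance from the pill, cm
    target = 0.10             # desired holding force, arbitrary units
    for step in range(10):    # ten updates = one second at 10 Hz
        f = force_at(separation)
        separation = external_magnet_step(f, target, separation)
        print(f"t={step/10:.1f}s  separation={separation:.2f} cm  force={f:.3f}")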

"I think this is a good way for a more controlled drug-delivery system," says Frank Volke, head of a research team at the Fraunhofer Institute for Biomedical Engineering in ankt Ingbert, Germany that is developing similar technology. Volke's group created a pill containing a camera that can be controlled magnetically, and his group hopes to develop a drug-delivery system too. Other experimental approaches to controlling the movement of pills include using tiny robotic feet and using modules that self-assemble inside the body.

Slow-release pills, coated with a chemical that controls the rate at which the drug is dispensed, have been around for a while, but a way to steer a pill "would add more flexibility to that type of medication," says Maysam Ghovanloo, an assistant professor at the Georgia Institute of Technology, who is developing a smart pill that monitors drug compliance. He suggests that monitoring magnetic fields, rather than using X-ray, might ultimately make it safer to monitor the pill's position. Laulicht says they plan to eventually transition to a magnetic tracking system.

Moses Goddard, a general surgeon and associate professor at Brown University who was not involved in the work, says it is "an intriguing refinement of tools for improving magnetic guidance techniques." While there are no FDA-approved magnetic guidance technologies for use with drugs, Goddard says that such an approach could help treat diseases ranging from diabetes to Crohn's disease. The magnetic study will "give us a much better handle on how to use and manipulate magnetic forces safely and effectively in order to guide pills to areas of the bowel where we want them to go and remain for a controllable period of time," says Goddard. "It will be particularly useful for figuring out how to guide relatively large pills to specific areas of interest."

Metin Sitti, associate professor of engineering at Carnegie Mellon University who works on robotic pills, says the research "is very promising in the sense of medical applications of untethered magnetic capsules."
It will take some time before such a technology will be safe to use in people. What's more, diet and external surroundings would need to be carefully controlled to make sure that no unexpected magnetic forces come into play. But Laulicht says the system could potentially recognize whether another magnet was changing the force applied to a pill, and even counter it. "Ultimately, I do think this could be used in an outpatient setting," he says.

If the device is eventually adapted and approved for human use, it would probably be used only in extreme cases, such as gastrointestinal cancer or inflammatory bowel disease for which other therapies have failed, says Laulicht. The team's next steps are to try using it with real drugs, and to test it in larger animals.

By Kristina Grifantini
From Technology Review

Researchers Use MRI to Predict Your Gaming Prowess


How can you tell if you’re a natural gaming pro? Researchers say they need look no further than your basal ganglia.

Psychology professors at the University of Illinois at Urbana-Champaign said Thursday that they can now predict with what they call “unprecedented accuracy” a person’s skills at videogames and other complex tasks by first studying certain areas of their brains. The study, “Predicting Individuals’ Learning Success From Patterns of Pre-Learning MRI Activity,” will be published in the online journal PLoS One.

“Our data suggest that some persistent physiological and/or neuroanatomical difference is actually the predictor of learning,” said University of Illinois psychology professor and research leader Art Kramer in a statement.

The researchers first found subjects who had not previously spent much time playing videogames. Then, they imaged their brains with MRI scans before having them play a videogame called Space Fortress, developed by the university. 

This was the game used in last year’s study, by some of the same researchers, that first showed the correlation between brain size and game aptitude. 

At that time, the research showed that “nearly a quarter” of the difference in performance among players could be predicted by the size of brain parts like the nucleus accumbens and putamen. Today, with more-refined techniques, the scientists say that number is between 55 percent and 68 percent.

“We find variations among participants in the patterns of brain activity in their basal ganglia,” Dirk Bernhardt-Walther, an Ohio State University psychology professor who led the design of the experiment, said in a statement Thursday. “Powerful statistical algorithms allow us to connect these patterns to individual learning success.”
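
The published study uses its own MRI features and statistical pipeline; as a generic sketch of the kind of analysis described — predicting a learning score from pre-training brain-activity patterns and checking it on held-out subjects — something like the following would do. All data here are synthetic and the model is deliberately simple.

    # Generic sketch: cross-validated prediction of learning success from
    # pre-learning activity patterns (synthetic data, not the study's pipeline).
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n_subjects, n_voxels = 34, 50
    X = rng.normal(size=(n_subjects, n_voxels))                          # pre-learning patterns
    true_w = rng.normal(size=n_voxels) * (rng.random(n_voxels) < 0.1)    # a few informative voxels
    y = X @ true_w + rng.normal(scale=1.0, size=n_subjects)              # later learning success

    # How much variance in learning success do the patterns predict in held-out subjects?
    scores = cross_val_score(Ridge(alpha=10.0), X, y, cv=5, scoring="r2")
    print("held-out R^2 per fold:", np.round(scores, 2))
    print(f"mean variance explained: {scores.mean():.0%}")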

By Chris Kohler 
From wired.com

Nanolasers Heat Up

Researchers have cleared a major hurdle to the practical use of nanoscale lasers, opening the way to fundamentally new capabilities in biosensing, computing, and optical communications. A team at the University of California, Berkeley, has demonstrated the first semiconductor plasmon nanolaser, or "spaser," that can operate at room temperature.

 Hot spot: Blue light (top) emitted from the first semiconductor “spaser” (bottom) that runs at room temperature, instead of in a cryogenic vacuum. This special type of nanolaser amplifies particles called surface plasmons, which can be confined in smaller spaces than conventional light.

While traditional lasers work by amplifying light, spasers amplify particles called surface plasmons, which can do things that the photons in ordinary light waves can't. For instance, photons can't be confined to areas with dimensions much smaller than half their wavelength, or about 250 nanometers, limiting the extent to which optical devices can be miniaturized. Plasmons, however, can be confined in much smaller spaces and then converted into conventional light waves—making them useful for ultra-high-resolution imaging or miniaturized optical circuits that might, for example, operate 100 times faster than today's fastest electronic circuits. 
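
To put the roughly 250-nanometer figure in context, here is the half-wavelength limit worked out for a representative visible wavelength (simple arithmetic; 500 nm is my assumed value, and the device dimensions are the ones quoted later in the article):

    # The half-wavelength confinement limit quoted above, for ~500 nm visible light.
    wavelength_nm = 500
    print(f"diffraction-limited confinement ~ {wavelength_nm / 2:.0f} nm")

    # For comparison, the spaser structure described below is far smaller:
    semiconductor_nm = 45     # cadmium sulfide layer thickness
    gap_nm = 5                # magnesium fluoride gap where the plasmons are confined
    print(f"spaser gap: {gap_nm} nm, semiconductor layer: {semiconductor_nm} nm")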

Working with Berkeley mechanical-engineering professor Xiang Zhang, postdocs Ren-Min Ma and Rupert Oulton designed and demonstrated the new semiconductor spaser. It uses metals and semiconductors, long recognized to be attractive materials because of their ubiquity and resilience. But previous spasers made of them lost too much energy to sustain lasing unless cooled to extremely low temperatures, below -250 °C. 

"For a time there was a lot of criticism that plasmon lasers would only work at low temps," says Martin Hill, a professor of electrical engineering at the Technical University of Eindhoven, in the Netherlands, who researches nanolasers. "This [is] an interesting demonstration and a step towards making useful devices and encouraging more people to look at plasmon-mode nanolasers."

The team's device contains a 45-nanometer-thick, 1-micrometer square of cadmium sulfide, a semiconductor used in some solar cells and photoresistors for microchip manufacturing. The square rests on a 5-nanometer slice of magnesium fluoride, atop a sheet of silver. When light from a commercial laser hits the metal, plasmons are generated on its surface. But the cadmium sulfide square confines the plasmons to the gap, reflecting them back each time they hit an edge. Less than 5 percent of the radiation escapes the structure, allowing sustained surface-plasmon lasing, or "spasing," at room temperature. The research was published online in Nature Materials on December 19.

This isn't the first spaser to work at room temperature. In fact, the very first spaser used dye-based materials that work at room temperature. But these materials can only be activated with pulses of light—called optical pumping—which limits applications. The Berkeley team used optical pumping to demonstrate its laser because "it's easier," says Oulton, but the major advantage of semiconductor lasers is that they can be pumped electrically—the team's ultimate goal. "We need to be able to plug real-world devices into a wall socket. This is without question," Oulton says.

While Hill is excited by the Zhang group's demonstration, he notes that "electrically pumped devices are a much more technically difficult thing. For example, for photonic crystal lasers, it took many years from the first optically pumped laser until an electrically pumped device was made."

The Nature Materials paper describes only sustained lasing within the cadmium sulfide cavity, which Ma says is useful for applications like single-molecule detection, important in high-sensitivity biological and medical testing. The researchers are working on demonstrating a biosensor based on the laser, and Ma says a practical device might be possible within a few years. They have also developed ways to couple the light output of the spaser so that it can be used in plasmonic circuits for optical computing or communications. Building simple plasmonic circuits is another project Zhang's group is pursuing.

Other possible applications for the spaser include using it to focus light beams in photolithography, making possible the manufacture of microchips with features smaller than 20 nanometers, about the limit of optical lasers. It could also be useful for packing more data onto storage media such as DVDs and hard disks. Ma notes that both applications would require the addition of a plasmonic lens to further focus the light; this is something else that Zhang's lab has worked on.

The group is enthusiastic about the potential to eventually commercialize this design, since it uses inorganic semiconductors already common in computing and communications. Ma says it should be "very easy" to integrate devices based on the design into current fabrication processes. Oulton and Hill both also mention that the materials are extremely robust and have long lifetimes inside devices. 

Optimistically, say Ma and Oulton, proof of principle—electrically injected plasmonic lasers that run at room temperature—should be possible within a couple of years, and commercial devices could follow quickly.

By Erika Jonietz
From Technology Review

Fingerprints Go the Distance

Over the years, fingerprinting has evolved from an inky mess to pressing fingers on sensor screens to even a few touch-free systems that work at a short distance. Now a company has developed a prototype of a device that can scan fingerprints from up to two meters away, an approach that could prove especially useful at security checkpoints in places like Iraq and Afghanistan.

 Prints from afar: The AIRprint biometric sensor can scan fingerprints from a distance of two meters.

The device, called AIRprint, is being developed by Advanced Optical Systems (AOS). It detects fingerprints by shining polarized light onto a person's hand and analyzing the reflection using two cameras configured to detect different polarizations.

Joel Burcham, director for projects at the Huntsville, Alabama-based company, says AIRprint could help make authorization more efficient in lots of settings. Instead of punching a keypad code or pressing fingers to a scanner, individuals could simply hold up a hand and walk toward a security door while the device checks their identity. "We're looking at places where the standard methods are a hassle," says Burcham. For instance, AIRprint could be linked to a timecard system, he says, to help avoid a logjam at manufacturing plants at the start or end of the workday.

Slightly smaller than a square tissue box, AIRprint houses two 1.3-megapixel cameras and a source of polarized light. One camera receives horizontally polarized light, while the other receives vertically polarized light. When light hits a finger, the ridges of the fingerprint reflect one polarization of light, while the valleys reflect another. "That's where the real kicker is, because if you look at an image without any polarization, you can kind of see fingerprints, but not really well," says Burcham. By separating the vertical and the horizontal polarization, the device can overlap those images to produce an accurate fingerprint, which is fed to a computer for verification.

The prototype device, which scans a print in 0.1 seconds and processes it in about four seconds, can handle only one finger at a time. Also, the scanned finger must remain at a fixed distance from the device. But by April, Burcham expects to have made significant improvements. By then, he says, the device should be able to scan five fingers at once even if a person is moving toward or away from the cameras, and the processing time ought to have dropped to less than a second.
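
To make the ridge-and-valley polarization idea concrete, here is a minimal, hypothetical sketch of how two polarized frames might be combined into a fingerprint image. AOS has not published its algorithm, so the normalized-difference approach, the ridge_contrast function, and the 1280 x 1024 frame size are all assumptions for illustration.

```python
# Hypothetical sketch: ridges and valleys reflect orthogonal polarizations, so a
# normalized difference of the two camera frames should emphasize the print.
# This is not AOS's published method; names and sizes are assumptions.

import numpy as np

def ridge_contrast(horiz: np.ndarray, vert: np.ndarray) -> np.ndarray:
    """Combine horizontally and vertically polarized frames into a ridge map.

    Both inputs are grayscale arrays of the same shape (values 0-255).
    Returns an array scaled to 0.0-1.0 in which polarization-dependent
    ridge/valley detail stands out.
    """
    h = horiz.astype(float)
    v = vert.astype(float)
    # The normalized difference cancels overall illumination, keeping only
    # the signal that differs between the two polarizations.
    contrast = (h - v) / (h + v + 1e-6)
    lo, hi = contrast.min(), contrast.max()
    return (contrast - lo) / (hi - lo + 1e-6)

# Toy usage with two synthetic 1.3-megapixel frames (1280 x 1024 assumed):
rng = np.random.default_rng(0)
horiz = rng.integers(0, 256, size=(1024, 1280))
vert = rng.integers(0, 256, size=(1024, 1280))
print(ridge_contrast(horiz, vert).shape)  # (1024, 1280)
```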

Burcham says several potential customers have indicated that a single-finger scanner would be sufficient for their needs—so AOS plans to sell both a single-finger device and a more expensive five-finger device. "We're looking at having product ready for market at the beginning of the third quarter this year," says Burcham.

The military has a growing interest in biometric sensors that operate at a distance. The U.S. Department of Defense awarded $1.5 million to Carnegie Mellon's CyLab Biometrics Lab to support development of technology that performs iris detection at 13 meters. 

One potential customer for the AIRprint is the Marine Corps. Jeremy Powell, head of identity operations at Marine Corps Headquarters, saw a demonstration of it about a year ago. Currently, individuals entering a military installation must place their fingers on a scanner, with a Marine standing beside them to help ensure a viable print. Powell would prefer there to be a safe distance between the Marine and the person being scanned. The AIRprint device could be mounted on a tripod and connected to a cable that runs behind a blast wall, where the Marine could safely assess the fingerprint result, he says.

AIRprint's two-meter standoff distance represents more than a technical advancement. "It is a step closer to being able to verify an individual's identity from a safe distance with or without their knowledge. As with all new technology, the hope is further advancements will follow and increase the standoff distance," says Powell. "This could potentially allow Marines to positively identify a target before engaging or conduct 'standoff' screenings from the safety of an armored vehicle."

Over the past nine years, the Marines have made increasing use of biometrics to distinguish friend from foe in Iraq and Afghanistan. Says Powell, "It's actually been very successful so far, and technologies like AIRprint have the potential to make it even more so." 


By Sandra Swanson
From Technology Review

Synthetic Genes Sub for Natural Ones in Microbe Experiment


A handful of bacterial genes crucial to survival were successfully replaced by artificial ones in a new synthetic-biology experiment.

It’s not clear how the synthetic genes rescued the doomed E. coli bacteria, which had several important sequences of DNA knocked out of their genomes. But scientists think synthetic proteins produced by the new genes replaced the missing natural versions.

“To enable life you need genes and proteins, which are information and machines,” said molecular biologist Michael Hecht of Princeton University, co-author of the study published online in PLoS ONE. “These evolved over a very long period of time, but we wanted to ask, ‘Are they really special, or can we make stuff like them from scratch?’ It seems we can.”

One of synthetic biology’s primary goals is to create customizable organisms able to produce food, fuel or pharmaceuticals; clean toxins from the environment; or even function as computers.

Most synthetic-biology experiments, including the J. Craig Venter Institute’s recent creation of a synthetic life form, rely on existing genes in nature. In the new experiment, however, Hecht and his team engineered a semi-random library of 1.5 million made-from-scratch genes.

Genes contain instructions for building proteins, which are made of units called amino acids. There are 20 different amino acids a protein can be made of, and some sequences make a protein fold into three dimensions. Each gene in Hecht’s synthetic library calls for proteins made of 102 amino acids. Yet, instead of randomly filling each spot, the genes included sequences prompting the proteins to fold into four-helix structures.

“Folding in three dimensions [is required for functionality] when it comes to proteins, so the library isn’t completely randomized,” Hecht said. “You might think of it as a targeted shotgun of randomness instead of a bomb of it.”
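
As a rough illustration of what a "targeted shotgun of randomness" can look like in practice, the sketch below generates semi-random 102-residue sequences in which a repeating polar/nonpolar pattern biases each protein toward a four-helix fold. The specific pattern and residue pools are assumptions for illustration; they are not the Hecht lab's actual design rules.

```python
# Illustrative sketch (not the Hecht lab's design rules): each 102-residue
# sequence is random at the level of individual amino acids, but a repeating
# polar/nonpolar pattern biases it toward forming amphipathic helices that can
# pack into a four-helix bundle.

import random

POLAR = "DEHKNQRST"     # residues favored on the water-facing side of a helix
NONPOLAR = "AFILMVWY"   # residues favored in the buried hydrophobic core
LENGTH = 102            # protein length cited in the article
PATTERN = "PNPPNNP"     # hypothetical repeating pattern: 'P' = polar slot,
                        # 'N' = nonpolar slot, placed so nonpolar residues
                        # tend to line up on one face of an alpha helix

def random_library_protein(rng: random.Random) -> str:
    """Return one semi-random sequence that obeys the polar/nonpolar pattern."""
    residues = []
    for i in range(LENGTH):
        pool = POLAR if PATTERN[i % len(PATTERN)] == "P" else NONPOLAR
        residues.append(rng.choice(pool))
    return "".join(residues)

# Five sample members; the real library held about 1.5 million genes.
rng = random.Random(42)
for protein in (random_library_protein(rng) for _ in range(5)):
    print(protein[:30] + "...")
```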


Twenty-seven strains of E. coli, each missing one gene critical for survival, individually mingled with the synthetic-gene library. Four strains of bacteria incorporated a synthetic gene into their DNA and grew on Petri dishes that contained only the bare nutrients for survival. Without the new genes, the four strains didn’t grow at all.

Taking the rescue of doomed microbes further, Hecht’s team made a strain of E. coli missing all four genes from the previous experiments. When the synthetic replacement genes from the library were added, the microbe was rescued.

“I think this is a very interesting start to some more research,” said biotechnologist Andrew Ellington of the University of Texas at Austin, who was not involved with the experiments. “I’d like to see more proof that these proteins are doing what [Hecht] says they’re doing…. There may be some weird things going on.”

Hecht said “it would be nice” to untangle the biochemistry of his genetic rescues, adding that the synthetic genes weren’t exactly optimum replacements for nature’s versions that were chiseled over billions of years of evolution. But he said that’s not the biggest takeaway from the experiment.

“We know which specific genetic sequences rescue the strains, even if we don’t yet know how they work,” Hecht said.

In addition to following up on the biochemistry of the genes that revived the bacteria, Hecht’s laboratory plans to engineer more-complex libraries and knock out even-more-crucial genes.

“‘How far can you go with this?’ is what we want to know. Could you knock out 100 genes and rescue all of them? Eventually a whole genome?” Hecht said.

By Dave Mosher 
From wired.com

Cancer in a Single Catastrophe: Chromosome Crisis Common in Cancer Causation

The scars of this chromosomal crisis, in which one or more chromosomes shatter and are stitched back together in a single catastrophic event, are seen in cases from across all the common cancer types, accounting for at least one in forty of all cancers. The phenomenon, known as chromothripsis, is particularly common in bone cancers, where the distinctively ravaged genome is seen in up to one in four cases.

 Evidence of chromothripsis in small cell lung cancer.

The team looked at structural changes in the genomes of cancer samples using advanced DNA sequencing technologies. In some cases, they found dramatic structural changes affecting highly localised regions of one or a handful of chromosomes that could not be explained using standard models of DNA damage.
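
As a rough illustration of the kind of signal involved (not the Sanger Institute team's actual pipeline), the sketch below flags chromosomes whose rearrangement breakpoints cluster within a narrow region, the hallmark described above. The 50-megabase window and 50-event threshold are arbitrary assumptions chosen for the toy example.

```python
# Illustrative sketch: flag chromosomes where rearrangement breakpoints pile up
# within a localized window, far beyond what scattered, independent damage
# would produce. Window size and event threshold are assumed values.

from collections import defaultdict

def clustered_chromosomes(breakpoints, window=50_000_000, min_events=50):
    """Return chromosomes with at least `min_events` breakpoints inside some
    span of `window` base pairs.

    breakpoints: iterable of (chromosome, position) pairs from a tumour genome.
    """
    by_chrom = defaultdict(list)
    for chrom, pos in breakpoints:
        by_chrom[chrom].append(pos)

    flagged = []
    for chrom, positions in by_chrom.items():
        positions.sort()
        start = 0
        for end in range(len(positions)):
            # Shrink the window from the left until it spans <= `window` bp.
            while positions[end] - positions[start] > window:
                start += 1
            if end - start + 1 >= min_events:
                flagged.append(chrom)
                break
    return flagged

# Toy case: 60 breakpoints crammed into roughly 9 Mb of one chromosome arm.
toy = [("chr8", 40_000_000 + i * 150_000) for i in range(60)]
print(clustered_chromosomes(toy))  # ['chr8']
```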

"The results astounded us," says Dr Peter Campbell, from the Cancer Genome Project at the Wellcome Trust Sanger Institute and senior author on the paper. "It seems that in a single cell in a single event, one or more chromosomes basically explode -- literally into hundreds of fragments.

"In some instances -- the cancer cases -- our DNA repair machinery tries to stick the chromosomes back together but gets it disastrously wrong. Out of the hundreds of mutations that result, several promote the development of cancer."

Cancer is typically viewed as a gradual evolution, taking years to accumulate the multiple mutations required to drive the cancer's aggressive growth. Many cancers go through phases of abnormal tissue growth before eventually developing into malignant tumours.

The new results add an important insight: a process that must now be included in our understanding of cancer genome biology. In some cancers, a chromosomal crisis can generate multiple cancer-causing mutations in a single event.

"We suspect catastrophes such as this might happen occasionally in the cells of our body," says Dr Andy Futreal, Head of Cancer Genetics and Genomics at the Wellcome Trust Sanger Institute and an author on the paper. "The cells have to make a decision -- to repair or to give up the ghost.

"Most often, the cell gives up, but sometimes the repair machinery sticks bits of genome back together as best it can. This produces a fractured genome riddled with mutations which may well have taken a considerable leap along the road to cancer."

In one case of colorectal cancer, such a genome explosion caused 239 rearrangements on a single chromosome.

The damage was particularly common in bone cancers, where it affected five of twenty samples. In one of these samples the team found three cancer genes that they believe were mutated in a single event: all three are genes that normally suppress cancer development and when deleted or mutated can lead to increased cancer development.

"The evidence suggests that a single cellular crisis shatters a chromosome or chromosomes," says Professor Mike Stratton, Director of the Wellcome Trust Sanger Institute and an author on the paper, "and that the DNA repair machinery pastes them back together in a highly erroneous order.

"It is remarkable that, not only can a cell survive this crisis, it can emerge with a genomic landscape that confers a selective advantage on the clone, promoting evolution towards cancer."

The team propose two possible causes of the damage they see. First, they suggest it might occur during cell division, when chromosomes are packed into a condensed form. Ionizing radiation can cause breaks like those seen. The second proposition is that attrition of telomeres -- the specialized genome sequences at the tips of chromosomes -- causes genome instability at cell division.

From sciencedaily.com