Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........


What a Lunar Olympic Stadium Looks Like

I can't imagine what sports would be like on the Moon. I guess it'd be slow-motion 100-meter dashes or flying gladiators with lasers. Either way, I want to watch, and this stadium on the Moon looks like the perfect place. Set inside a half-kilometer crater, the Stadium of International Lunar Olympics would be round, with space for 100,000 spectators. It would use digital lighting to project field markers. The tower would hold a huge hotel, restaurants, and a Jeff Bridges lookalike in a space suit, watching people kill each other.

What If Your Entire Desk Were a Touchscreen?


Big touch surfaces are nothing new, but we like the approach taken here: a familiar form factor—the traditional sitting desk—mixed with the now-ubiquitous tech of touchscreens. Is the BendDesk what your office will look like someday? The prototype doesn't use the most sophisticated guts—relying on cameras and clunky projectors instead of an actual capacitive touch surface—but it looks pretty snappy in the video demo. It may be chunky, but the results are slick. As well, the bottom part of the BendDesk can be used as—gasp!—an actual desk. Which is pretty great, really, as it would free up room for low-tech work, with plenty of screen real estate left over for pinching and pushing digital stuff. Right now the BendDesk is confined to the labs of Germany's RWTH Aachen University, but we hope this kind of clever design slinks its way out of academia.

From gizmodo.com

Computers Get Help from the Human Brain

Most brain-computer interfaces are designed to help disabled people communicate or move around. A new project is using this type of interface to help computers perform tasks they can't manage on their own. In experiments, researchers used the interface to sort through satellite images for surface-to-air missiles faster than any machine or human analyst could manage alone.

Search me: Researchers at Columbia University use signals from a worn electroencephalogram (EEG) device to search rapidly through images.

"With Google, you have to type in words to describe what you're interested in," says Paul Sajda, an associate professor at Columbia University. "But let's say I'm interested in something 'funny looking.' " 

Sajda explains that computers struggle to classify images according to this kind of abstract concept, but humans can do it almost instantly. Electrical signals within the brain fire before a person even realizes he's recognized an image as odd or unusual. 

Sajda's device, called C3Vision (cortically coupled computer vision), uses an electroencephalogram (EEG) cap to monitor brain activity as the person wearing it is shown about 10 images per second. Machine-learning algorithms trained to detect the neurological signals that signify interest in an image are used to analyze this brain activity. By monitoring these signals, the system rapidly ranks the images in terms of how interesting they appear to the viewer. The search is then refined by retrieving other images that are similar to those with the highest rank. "It's a search tool that allows you to find images that are very similar to those that have grabbed your attention," says Sajda.
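As a rough illustration of the loop Sajda describes (show a small sample rapidly, score each image's EEG response, rank, then pull similar images from the database), here is a minimal sketch in Python. The feature vectors, the stand-in scoring function, and all sizes are invented for illustration; this is not Neuromatters' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: 10,000 images, each with a precomputed feature vector.
n_images, n_features = 10_000, 32
features = rng.normal(size=(n_images, n_features))

# Step 1: show only a small random sample (~100 of 10,000) in rapid succession.
sample = rng.choice(n_images, size=100, replace=False)

# Step 2: a trained classifier scores the EEG response to each shown image.
# A random stand-in replaces the real machine-learning decoder here.
def eeg_interest_score(image_ids):
    return rng.uniform(size=len(image_ids))

scores = eeg_interest_score(sample)

# Step 3: keep the images whose EEG "interest" signature ranked highest.
top = sample[np.argsort(scores)[::-1][:5]]

# Step 4: refine the search by retrieving database images most similar
# (cosine similarity of feature vectors) to the top-ranked ones.
query = features[top].mean(axis=0)
sims = features @ query / (np.linalg.norm(features, axis=1) * np.linalg.norm(query))
refined = np.argsort(sims)[::-1][:20]
```

In the real system, the scoring step would be the trained decoder applied to the 64-channel EEG, and the similarity search would run over learned image features rather than random vectors.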

At the speed at which it works, the conscious brain is unable to register a "hit." But the neurological visual pathways work much faster, says Sajda. The brain produces distinct electrical signals that can be detected and decoded by the 64 EEG electrodes within the cap. "It's on the edge of the subconscious," he says.

Most brain-computer interface research is focused on harnessing conscious processes, says Eric Leuthardt, director of the Center for Innovation in Neuroscience and Technology at Washington University School of Medicine. "Reading our brain signals and being able to distinguish 'interesting' from 'not interesting' prior to us having a conscious perception of seeing the item tells us that there is a substantial amount of processing that our brain does prior to the conscious awareness of the perception."

Andrew Blake, a computer vision expert and managing director of Microsoft Research Cambridge, in the U.K., says that "controlling machines directly from brain activity is a subject of intense research interest, but it is very difficult to obtain precise control, particularly without invasive methods." 

Sajda calls the approach "information triage" because it uses limited information from the brain to help refine an image search. "The key is, we don't show the whole database. We take a small sample and show it very rapidly," says Sajda. "From 10,000 images, we may show just 100 or so." 

This process can deliver any images that grab the subject's attention. "One of the cool things about the idea is, if you see something new you didn't expect, and it grabs your attention, then this will also get a relatively high score," he says. 

Sajda and colleagues at Columbia have founded a spinoff company called Neuromatters to commercialize the technology with $4.6 million in funding from the Defense Advanced Research Projects Agency. Along with military applications, Sajda says possible applications might include advanced gaming interfaces and neuro-marketing. "It could be used for getting demographic feedback on how much an advert grabs people's attention," he says. 

By Duncan Graham-Rowe
From Technology Review

Defeating Drug-Resistant Cancers

Last August, oncologist Keith Flaherty and colleagues at Massachusetts General Hospital published a study that gave hope to patients with metastatic melanoma. But the good news was tempered by a serious caveat: in most patients, the drug eventually stopped working, anywhere from months to years after treatment began.


This issue of drug resistance has plagued the new generation of so-called targeted cancer therapies, designed to block the effects of genetic mutations that drive the growth of cancer. In two new studies published last week in Nature, researchers from Dana Farber Cancer Institute in Boston and the University of California, Los Angeles, uncovered how some melanoma tumors fight back against these drugs. They say the insight will aid in the design of new drugs and drug combinations that will allow targeted therapies to work longer and maybe even overcome resistance altogether. 

"If we can understand and anticipate the full spectrum of ways cancers can get around these drugs, we can come up with formulas for combinations of drugs that could have lasting control," says Levi Garraway, an oncologist and scientist at Dana Farber.

In one study, Garraway, Flaherty, and collaborators analyzed the effects of 600 different protein kinases, which are types of enzymes, on melanoma tumor cells growing in a dish. They found that overactivity in nine of the protein kinases made the cells resistant to the type of drug that was so promising in Flaherty's melanoma study. One enzyme had never previously been implicated in cancer. The researchers confirmed the findings by analyzing tissue samples from melanoma patients who had evolved resistance to the drug.

It's not yet clear how common this particular mechanism of drug resistance is. But Flaherty says that, based on the findings, he is very optimistic about targeted therapies. "It's not chaos that creates resistance, it's the same rational cell and molecular biology that led to the development of these therapies in the first place," he says. "We don't need to invoke some phenomenally complex network biology to figure this out."

In a related paper in Nature, Roger Lo, a physician and scientist at UCLA's Jonsson Comprehensive Cancer Center, found changes similar to those in Garraway's study. Lo agrees that the results will help scientists figure out more effective drug combinations. He likened the approach to those used to eradicate stubborn viruses. "A cocktail would be designed to cut off any possible escape route," he says. However, "it's more daunting to cover all grounds for a cancer cell," because such cells tend to be very "plastic," or capable of change.

Lo also cautions that researchers have studied relatively few patients, so it's not yet clear how broadly these findings will apply to larger numbers of patients. (One problem is that it's hard to come by tissue samples—researchers need tissue from the same patient both before and after treatment.) The researchers found resistance mechanisms in about 40 percent of the drug-resistant patients they studied, and are now looking for explanations for the remaining 60 percent. 

By Emily Singer
From Technology Review

Ultrasound Gets More Portable

Two years ago, computer engineers at Washington University in St. Louis created a prototype that took ultrasound imaging to a new level of mobility and connectivity—they connected an ultrasound probe to a smart phone. Now a startup awaiting clearance from the U.S. Food and Drug Administration hopes to begin selling the device next year.

 Phone, meet probe: An image of a fetus at 23 weeks is displayed on Mobisante’s phone-based ultrasound device. The probe connects to the device through a USB port.

Such a device would be useful for emergency responders, who could scan an injured person to detect internal bleeding or other trauma, and then immediately send an image to the hospital so physicians could be better prepared for the patient's arrival. Or a nurse practitioner visiting a pregnant woman's home could ask a specialist stationed elsewhere to weigh in on anomalies in the scan.

The company, Mobisante, was cofounded by David Zar (one of the prototype's developers) and Sailesh Chutani, formerly the head of external research at Microsoft. While at Microsoft, Chutani's group provided a mobile-health technology grant to Zar and his colleague, allowing them to design their smart-phone ultrasound prototype.

That Microsoft backing does not extend to Mobisante, though. The startup, which is based in Redmond, Washington, is in talks with venture capital investors, but so far it's been 40 percent self-funded and 60 percent funded by potential customers, such as community clinics, says Chutani, who declined to reveal the amount raised. (Many community clinics don't have the budget for a standard ultrasound machine, which can cost well above $50,000.)

Mobisante hasn't finalized the price for its device yet, but Chutani plans to sell several versions of it, with probes at different frequencies for different medical applications. Depending on the components included, the price could range from $5,000 to $10,000 initially and drop by half within the next three years, Mobisante says.

For the past two months, the company has provided the device to beta users at nine U.S. locations. Oliver Aalami, a vascular surgeon at Valley Medical Center in Renton, Washington, is one of those testers. He's using Mobisante's device to guide the placement of central lines, which are large-bore catheters used in veins. He carries it in his lab coat pocket or his briefcase, which is much more convenient than his medical center's ultrasound machine, housed on a cart in the operating room. "To use that machine, I have to wheel it out of the OR, and then take the elevator up four floors to the ICU," he says.

With Mobisante's device, he can head right to a patient's room for an ultrasound, and he doesn't have to rearrange the furniture to reach the bedside. "When you're in a tight room, you can't always get an ultrasound machine in without taking chairs out and moving the bed," he says. His beta device isn't connected to a network, so Aalami hasn't tested that capability yet. But he says the ultra-portable design is appealing in itself: "It saves me time and lets me worry about other things. I can focus on the rest of my practice."

Mobisante's device uses the Toshiba TG01 smart phone and an ultrasound probe manufactured by Interson. The probe is a tweaked version of Interson's ultrasound probe that connects to a laptop USB port. Zar and a Washington University colleague designed the probe several years ago and licensed it to the company. Together, the phone and probe weigh about 13 ounces.

The new device would not be the only handheld ultrasound device on the market. Several others have already received FDA clearance, including GE's Vscan and Siemens's Acuson P10. Some of the existing devices display color-coded images, which can show blood flow; Mobisante's device shows black-and-white images. There's also the question of how Mobisante's pricing will stack up against competitors'—GE says its Vscan device costs $7,900.

What makes Mobisante's device interesting is that it can connect directly to a cellular network or Wi-Fi, allowing the user to send images with the push of a button. Handheld ultrasound devices currently on the market can't e-mail images directly—a user has to transfer them to a PC first, either with a docking station or by removing the device's memory card.

Built-in connectivity isn't the only advantage of a smart-phone-based system, says Chutani. While other makers of handheld ultrasound devices are wedded to custom hardware, Mobisante expects to reduce its costs as smart-phone technology continues to improve. "Billions of dollars are being spent to make this platform more powerful, so it makes sense to ride that investment rather than try to duplicate it," Chutani says.

By Sandra Swanson
From Technology Review

Physicists Conjure the First Super-Photon, Creating a Whole New Kind of Light Source

Physicists from the University of Bonn are looking at things in a whole new light, quite literally. Through the clever use of mirrors and some smart science, researchers there have created a wholly new source of light by cooling photons to the point that they condense into a “super photon.” The so-called Bose-Einstein condensate made up of photons was, until now, thought impossible.


The Super-Photon: The University of Bonn team in the lab (left) and an artist's rendering of their "super-photon." Credits: Volker Lannert / University of Bonn (left); Jan Klaers / University of Bonn

“Super particles” have been created before, but never out of light. For instance, take rubidium atoms down to a low enough temperature in a compact space, and they quickly become indistinguishable, behaving like a single particle (a state known as a Bose-Einstein condensate). In theory, this should also work with photons. But it doesn't, because when you start to cool photons down, they simply disappear. Perhaps expectedly, light doesn't chill very well. Think about a light bulb: apply a current and the filament gets hot and begins to give off light of different colors—red, then yellow, then blue-white. Scientists measure this kind of light-heat against a theoretical model known as a black body: dark until heated to a certain temperature, at which point it gives off light at wavelengths that depend on that temperature (click through the source link below for more on this).

When a black body cools down, at some point it no longer radiates light in the visible spectrum, giving off infrared photons instead. Therein lies the problem with photons – as temperature and radiation intensity decrease, so does the number of photons. Keeping them together in quantity while cooling them has presented a fundamental problem for creating the Bose-Einstein condensate composed of photons.
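The shift out of the visible spectrum as a black body cools follows directly from Wien's displacement law; a quick numerical check:

```python
# Wien's displacement law: a black body radiates most strongly at
# wavelength lambda_peak = b / T, where b ≈ 2.898e-3 m·K.
WIEN_B = 2.898e-3  # Wien's displacement constant, m·K

def peak_wavelength_nm(temp_kelvin):
    """Peak emission wavelength in nanometres for a temperature in kelvin."""
    return WIEN_B / temp_kelvin * 1e9

sun_like = peak_wavelength_nm(5800)   # ≈ 500 nm: green, mid-visible
cooling  = peak_wavelength_nm(1000)   # ≈ 2900 nm: already infrared
room     = peak_wavelength_nm(300)    # ≈ 9660 nm: far infrared
```

By room temperature the peak has moved more than an order of magnitude past the visible band, which is exactly why a cooled black body "goes dark" to the eye.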

The key to keeping the photons from dissipating: keep them moving. The University of Bonn team bounced the photons back and forth between two mirrors. Along the way, the photons collided with dissolved pigment molecules placed between the reflective surfaces; the molecules absorbed the photons and spat them back out with each collision. Through these collisions the photons gradually assumed the temperature of the pigment molecules, cooling to room temperature without being lost in the process.

Physics aside, the discovery is cool for a variety of reasons. Most notably, it's an entirely new kind of light with vast industrial implications, especially in the chip-making sector. Currently, lasers don't operate at really short wavelengths like UV and X-ray. With a photonic Bose-Einstein condensate, the researchers say, this should be possible.

The inability to etch chips with lasers in the shorter wavelengths has limited how precisely they can design circuits on silicon. Finer etching begets higher-performing microchips, and that’s just a start. When you create a whole new kind of light, everything from medical imaging and laboratory spectroscopy to photovoltaics could stand to benefit.

By Clay Dillow
From popsci.com 

New Onboard Converter Technology Harvests Auto Engine Exhaust to Generate Electricity

Future cars may eat their own exhaust, converting heat from their emissions into electricity. The conversion can improve fuel economy by reducing an engine’s workload.

 Thermoelectric Generators Purdue mechanical engineering doctoral student Yaguo Wang works with a high-speed laser at the Birck Nanotechnology Center to study thermoelectric generators.

Purdue University researchers are working with General Motors to build thermoelectric generators, which produce an electric current when there is a temperature difference. Starting in January, the team will install an initial prototype behind a car’s catalytic converter, where it will harvest heat from exhaust gases that can reach 1,300 degrees F. The prototype involves small metal chips a few inches square. The process requires special metals that can withstand a huge temperature differential — the side facing the hot gases stays warm, and the other side must stay cool. The difference must be maintained to generate a current, explains Xianfan Xu, a Purdue professor of mechanical engineering who is leading the research. 
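For a feel of the numbers, the open-circuit voltage of a thermoelectric stack scales with the temperature difference across it. A minimal sketch, with the Seebeck coefficient and couple count assumed for illustration, not taken from the Purdue/GM prototype:

```python
# Open-circuit voltage of a thermoelectric module: V = S * dT * n_couples,
# where S is the Seebeck coefficient. S = 200 µV/K is an illustrative
# order-of-magnitude figure for good thermoelectric materials.
SEEBECK_V_PER_K = 200e-6

def open_circuit_voltage(t_hot_c, t_cold_c, n_couples=1):
    """Voltage from the hot/cold temperature difference (°C in, volts out)."""
    return SEEBECK_V_PER_K * (t_hot_c - t_cold_c) * n_couples

# Exhaust side at ~700 °C (≈ 1,300 °F) against a 100 °C cold side,
# with 100 couples wired in series:
v = open_circuit_voltage(700, 100, n_couples=100)  # ≈ 12 V
```

The sketch also shows why the cold side must stay cool: let the two faces equalize and dT, and with it the voltage, collapses to zero.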

One of the group’s biggest challenges is finding a material that conducts heat poorly, so heat is not transferred from one side of the chip to the other. GM researchers are currently testing skutterudite, a cobalt arsenide mineral that can also contain nickel or iron. Rare-earth metals can reduce skutterudite’s thermal conductivity even more, but we all know how problematic rare earths can be; to sidestep the problem, researchers are hoping to replace them with “mischmetal” alloys.

Thermoelectric technology has applications beyond powering cars — the generators could also be used to harness waste heat in homes and power plants, and to power new generations of solar cells, Xu said. The work is being funded with $1.4 million from the National Science Foundation and the Department of Energy.

By Rebecca Boyle
From popsci.com 

A Greener Way to Make Plastic

Chemical refineries are great at converting petroleum into gasoline and the building blocks of plastics and other consumer goods. But when it comes to sustainable starting materials, such as wood chips, corn stalks, or other plant "biomass," refineries are too inefficient to make the process commercially viable. Researchers have now given that efficiency a major boost, perhaps enough of one to allow us to leave petroleum behind. 

A portable biorefinery for pyrolysis oil production.

There are plenty of ways to convert biomass into useful fuels and chemicals. But each has drawbacks. Yeast and other microbes can ferment plant sugars into ethanol, a gasoline additive. But only moderate amounts of ethanol can be added to gasoline without requiring engine modifications. Algae readily produce bio-oils, but the technology remains costly and requires too much land and fresh water to make an impact on the market. A third route, known as pyrolysis, heats dried and ground biomass to about 550˚C in an oxygen-depleted chamber (so the biomass doesn't burn), producing a mixture of gases, liquids, and a gray, carbon-rich solid called coke. When the gases cool and condense, they combine with the liquids to form a mixture of oils. These oils are cheap: It costs only $1 to make oil through pyrolysis that has the same energy content as a gallon of gasoline. But they must be further chopped into smaller hydrocarbons before they are suitable for industrial use. In addition, oxygen-rich acids in the oil make it corrosive, so it can't be used in conventional engines and storage containers. 

Engineers have worked to tackle these problems with pyrolysis oils by adding a second treatment step, where the oils react with hydrogen over catalysts called zeolites. The hydrogen replaces oxygen in the acids and other compounds in the bio-oils and makes them less corrosive, and the zeolites break the large hydrocarbons into compounds such as toluene and benzene that are commonly used building blocks for a large number of industrial chemicals. The problem is that coke and other substances made by pyrolysis can gum up zeolites. Only 20% of the pyrolysis oils are converted to useful chemicals. Most of the rest winds up as coke, carbon monoxide, and carbon dioxide. 

In hopes of improving the efficiency, George Huber, a chemical engineer at the University of Massachusetts, Amherst, and colleagues tested several combinations of zeolites (hundreds are known) and reaction conditions and found one standout. They split the second treatment step in two. First, they reacted their pyrolysis oils with hydrogen over a ruthenium and platinum catalyst, which stripped out much of the oxygen from the acids and added hydrogen. This made a mix of stable compounds that were less likely to form coke when they were processed in the second step over the zeolites. It also allowed the zeolites to convert 60% of the longer hydrocarbon chains into five key chemical starting materials: benzene, toluene, xylene, propylene, and ethylene. Together, these compounds represent five of the seven key starting materials (the others are methanol and 1,3-butadiene) that form the basis of the $400-billion-a-year petrochemical industry. The process can also be tailored to produce more of individual chemical building blocks, which in the future could allow chemical companies to produce the most valuable building blocks at any given time, the team reports in the 26 November issue of Science.
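To put the two conversion figures side by side (the batch size is illustrative):

```python
# Useful-chemical yield from the same batch of pyrolysis oil under
# single-step zeolite upgrading (~20% converted) versus the two-step
# process, hydrogenation then zeolites (~60% converted).
def useful_chemicals_kg(oil_kg, conversion):
    return oil_kg * conversion

batch = 1000.0  # kg of pyrolysis oil (illustrative)
single_step = useful_chemicals_kg(batch, 0.20)  # 200 kg
two_step    = useful_chemicals_kg(batch, 0.60)  # 600 kg
```

The two-step route triples the useful output per batch, with correspondingly less carbon lost to coke, carbon monoxide, and carbon dioxide.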

Robert Brown, who directs the Bioeconomy Institute at Iowa State University, Ames, says the new work is noteworthy because chemical companies have many decades of experience in using heat and catalysts to convert petroleum into a wide variety of commodity chemicals. "There is a notion that thermochemical processing is a mature technology," with little room for improvements, Brown says. "Huber's work demonstrates that there is potential for many advances," as the technology is applied to biomass, he says. Huber says he has formed a start-up company, Anellotech, that plans to commercialize the technology, first with a small pilot plant, followed by a commercial demonstration facility.


From news.sciencemag.org

Super-Silent Jimmy Carter Ready to Spy on North Korea

It’s not the diplomacy-minded former president who is ready to spy, it’s the secretive nuclear submarine named for him. The surveillance and attack capabilities it’s supposed to have could keep the tense situation on the Korean peninsula from spiraling out of control.

In the wake of yesterday’s North Korean artillery barrage against a South Korean island, the U.S.S. George Washington is sailing to South Korea to participate in joint exercises. 


A statement from the Navy’s Seventh Fleet, which patrols the western Pacific, says the drill was planned before the “unprovoked” North Korean attack, but will demonstrate “the strength of the [South Korea]-U.S. Alliance and our commitment to regional stability through deterrence.” In other words: to stave off another attack, not to initiate a retaliation.

The George Washington aircraft carrier is equipped with 75 planes and around 6,000 sailors. But it’s not coming alone. It’s got the destroyers Lassen, Stethem and Fitzgerald with it, and the missile cruiser Cowpens in tow. Rumor also has it that the carrier strike group will link up with another asset in the area: the undersea spy known as the Jimmy Carter, which can monitor and potentially thwart North Korean subs that might shadow the American-South Korean exercises.

According to plugged-in naval blogger Raymond Pritchett, who writes as Galrahn, word’s going around Navy circles that the first surveillance assets that the United States had in the air over yesterday’s Korean island battle were drones launched from the Jimmy Carter.

“North Korea couldn’t detect the USS Jimmy Carter short of using a minefield, even if they used every sonar in their entire inventory,” Galrahn writes. That’ll matter in case North Korea decides to launch another torpedo attack from a submarine, as it did in March to sink the South Korean corvette Cheonan.

The Navy doesn’t say much about what the Jimmy Carter can do, but the consensus is that it’s used for “highly classified missions.” Reportedly, it can tap undersea fiber-optic cables, potentially intercepting North Korean commands. 

It carries Navy SEALs to slip into enemy ports undetected. And its class of subs has 26-and-a-half-inch-diameter torpedo tubes, wider than those of the rest of the submarine fleet, in case the Carter has to take out rival ships. “That’s a Seawolf, the most powerful attack sub in the world,” says Robert Farley, a maritime and international-relations scholar at the University of Kentucky.

All that might be intended to keep the North Koreans from trying something during the exercises, scheduled to run from December 3 through 10. As bellicose as they’ve been this year, they’d be up against a carrier strike group on the lookout for North Korean aggression. 

The North’s 10 Yeono-class midget submarines — tiny subs with a crew of only a few sailors, designed mostly for firing torpedoes — are “only mildly more capable than the submarines the Nazis were using in 1945,” Farley says, but “if there’s a nervous or adventurous North Korean sub skipper out there, we could have a real problem.”

The real role of the George Washington’s carrier strike group is floating diplomacy and deterrence, signaling “the close security cooperation between our two countries, and to underscore the strength of our Alliance and commitment to peace and security in the region,” as the White House’s account of a phone call between the U.S. and South Korean presidents last night put it. 

And the Armed Forces Communications and Electronics Association’s influential NightWatch newsletter doubts that North Korea is really preparing for war: It doesn’t appear to have issued new military alerts, and it’s competing in the Chinese-sponsored Asian Games.

But should its submarines get ready to harass the United States during next month’s exercises, chances are the Jimmy Carter will see it first.

By Spencer Ackerman
From wired.com

The Key Ingredient to Effective Cancer Treatments

About 50 percent of cancer patients have tumors that are resistant to radiation because of low levels of oxygen—a state known as hypoxia. A startup in San Francisco is developing proteins that could carry oxygen to tumors more effectively, increasing the odds that radiation therapy will help these patients.

Last month, the National Cancer Institute (NCI) gave that startup, Omniox, $3 million in funding. Omniox is collaborating with researchers at the NCI to test whether its oxygen-carrying compounds improve radiation therapy in animals with cancer.

 This image shows a mouse’s legs, with a tumor in the left leg. Hypoxic regions are indicated in light blue.

Most tumors have hypoxic regions, and researchers believe they have a significant impact on treatment outcomes in about half of patients. Tumor cells proliferate with such abandon that they outstrip their blood supply, creating regions with very low levels of oxygen. This lack of oxygen drives tumor cells to generate more blood vessels, which metastatic cells use to travel elsewhere in the body and spread the cancer.

Radiation therapy depends on oxygen to work. When ionizing radiation strikes a tumor, it generates reactive chemicals called free radicals that damage tumor cells. Without oxygen, the free radicals are short-lived, and radiation therapy isn't effective. "Radiation treatment is given today on the assumption that tumors are oxygenated" and will be damaged by it, says Murali Cherukuri, chief of biophysics in the Center for Cancer Research at the NCI in Bethesda, Maryland. "Hypoxic regions survive treatment and repopulate the tumor." 

Since the 1950s, researchers have tried many ways to get more oxygen into tumors, without success. Having patients breathe high levels of oxygen prior to radiation doesn't work, and developing an agent to carry oxygen through the blood to a tumor has proved very difficult. Artificial proteins that mimic the body's natural oxygen carrier, hemoglobin, can be dangerously reactive—destroying other important chemicals in the blood. And other oxygen carriers tend to either cling to oxygen too tightly or release it too soon, before it gets to the least oxygenated regions of the tumor.

"We're hoping that since most tumors are hypoxic, we could improve the effectiveness of radiation therapy in a large number of people," says Stephen Cary, cofounder and CEO of Omniox. The company has developed a range of proteins that are tailored to hold onto oxygen until they're inside hypoxic tissue. These proteins are not based on hemoglobin, so they don't have the same toxic effects.

The company's technology comes from the lab of Michael Marletta, a professor of chemistry at the University of California, Berkeley. "Most blood substitutes have failed," says Marletta, because they were based on globin proteins, a family that includes hemoglobin. Hemoglobin works in the body because it's encased in red blood cells. Unprotected, oxygenated globin proteins react with nitric oxide in the blood, destroying the oxygen, the nitric oxide, and the protein itself.

Marletta began looking for protein fragments that bound to oxygen, but not to nitric oxide. He started with the genetic sequence for the section of the globin proteins that binds to oxygen. He then used a computer program to scan through genome databases for similar sequences. This turned up a group of similar sequences in single-celled organisms. Marletta studied these protein sequences and found a group of them that bind to oxygen but not to nitric oxide. By altering the sequences slightly, Marletta found he was able to tailor how tightly the protein binds to oxygen. This level of control means Omniox can design a protein that releases oxygen only when the surrounding oxygen levels are very low—meaning the protein must travel all the way to the hypoxic part of the tumor before it releases its cargo.
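The effect of tuning binding tightness can be illustrated with the standard Hill model of oxygen binding; the p50 values and Hill coefficient below are illustrative choices, not Omniox's measured numbers:

```python
# Hill-equation model of an oxygen carrier's fractional saturation:
#   saturation(pO2) = pO2**n / (p50**n + pO2**n)
# p50 is the oxygen pressure at half-saturation; n is the Hill coefficient.
# All numbers here are illustrative, not Omniox's actual parameters.
def saturation(p_o2_mmhg, p50_mmhg, n=2.7):
    return p_o2_mmhg**n / (p50_mmhg**n + p_o2_mmhg**n)

# A hemoglobin-like carrier (p50 ≈ 26 mmHg) is already unloading oxygen
# at normal tissue pO2 (~40 mmHg)...
loose_normal = saturation(40, 26)   # noticeably below full saturation

# ...while a tight binder (p50 ≈ 3 mmHg) stays essentially fully loaded
# there, and releases its oxygen only where pO2 has collapsed, as in a
# hypoxic tumor core (~2 mmHg).
tight_normal  = saturation(40, 3)
tight_hypoxic = saturation(2, 3)
```

A lower p50 means tighter binding: the carrier holds its cargo through well-oxygenated tissue and dumps it only in the hypoxic regions the radiation therapy needs to reach.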

Cary, who was formerly a postdoctoral researcher in Marletta's lab, cofounded Omniox in 2006 to develop a therapeutic oxygen-carrying agent. The company has raised a total of about $4 million from the NCI and the University of California's Institute for Quantitative Biosciences. The company is currently housed in the university's biotech startup incubator, the QB3 Garage.

Omniox has so far demonstrated that its proteins accumulate in tumors in living animals, and that the proteins increase the oxygen concentration there.

Studies of the proteins are now underway at the NCI. Cherukuri, who is not affiliated with Omniox, has developed a tracer for use with magnetic resonance imaging that allows him to make a high-resolution, 3-D map of tumor oxygen concentrations.

Cherukuri is using this method to study the effects of Omniox's agents in mice with hypoxic tumors. "When you have a very hypoxic tumor, and you inject the animal with [the Omniox agent], the oxygenation increases," he says. He is working with General Electric to develop a human-scale prototype of this imaging system.

The Omniox and NCI studies are aimed at figuring out which of the company's proteins works best, when the proteins should be administered, and whether the treatment truly improves the effectiveness of radiation therapy. The studies will also look out for any dangerous immune responses to the foreign proteins. If the results are promising, the company hopes to begin tests in human patients in 2013.

By Katherine Bourzac
From Technology Review

Helmet Visor Could Protect Troops From Shock Waves


Adding a face shield to the standard-issue helmet worn by U.S. troops could help protect soldiers from traumatic brain injury, the signature wound of the recent wars in Iraq and Afghanistan. A new study that models how shock waves pass through the head finds that adding a face guard deflects a substantial portion of the blast that otherwise would steamroll its way through the brain. The study, to appear in the Proceedings of the National Academy of Sciences, is part of a spate of new work tackling traumatic brain injury. An estimated 1.5 million Americans sustain mild traumatic brain injury each year, and nearly 200,000 service members have been diagnosed with it since 2000, according to the Armed Forces Health Surveillance Center in Silver Spring, Maryland. 

While direct impact, such as banging the head, clearly can injure the brain, the forces endured when explosives send shock waves crashing through the head are much more difficult to characterize.

In the new study, researchers led by Raúl Radovitzky of MIT’s Institute for Soldier Nanotechnologies created an elaborate computer model of a human head that included layers of fat and skin, the skull, and different kinds of brain tissue. The team modeled the shock wave from an explosion detonated right in front of the face under three conditions: with the head bare, protected by the currently used combat helmet, and covered with the helmet plus a polycarbonate face shield.

The results showed that today’s helmet doesn’t exacerbate the damage, as some previous research had suggested. But at least in terms of blast protection, the current helmet doesn’t help much either. Addition of a face shield would improve matters, the team reports.

“The face shield contributes a lot to deflecting energy from the blast wave and not letting it directly touch the soft tissue,” says Radovitzky. “We’re not saying this is the best design for a face shield, but we’re saying we need to cover the face.”
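
A crude intuition for why a solid shield deflects an airborne pressure wave comes from the impedance mismatch at a material interface. The following is a linear-acoustics toy, not the study's 3-D blast simulation, and the material constants are textbook approximations:

```python
# Fraction of incident acoustic energy transmitted across an interface,
# from the linear-acoustics formula T = 4*Z1*Z2 / (Z1 + Z2)**2.
# Rough illustration only: real blast waves are nonlinear, and the
# published study used a full 3-D simulation of the head.

def transmitted_fraction(z1, z2):
    return 4 * z1 * z2 / (z1 + z2) ** 2

# Approximate characteristic acoustic impedances (kg / m^2 / s).
Z_AIR = 1.2 * 343              # density * sound speed, about 4.1e2
Z_POLYCARBONATE = 1200 * 2270  # about 2.7e6

t = transmitted_fraction(Z_AIR, Z_POLYCARBONATE)
print(f"energy transmitted air -> polycarbonate: {t:.2%}")
```

The enormous mismatch means well under a tenth of a percent of the incident acoustic energy crosses into the polycarbonate in this idealized picture; most is reflected, which is the sense in which a face shield "deflects" the wave.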

To validate the model, researchers at MIT and elsewhere will have to conduct experiments in the real world. But the work points to an intrinsic flaw in the current helmets.

“These helmets weren’t designed to stop a pressure wave; they were designed to stop bullets,” says Albert King, director of the Bioengineering Center at Wayne State University in Detroit. “Just like a football helmet wasn’t designed to stop a concussion, but to stop skull fracture.”

Designing a blast-resistant helmet requires a better knowledge of what happens in the brain when an explosion washes over it. Soldiers experiencing explosions often describe a wind or wave that makes them see stars. “I really got my bell rung,” is a common report.

The resulting “mild” traumatic brain injury doesn’t lead to long-term loss of consciousness, and brain scans yield normal results. But labeling these injuries as mild is a misnomer, says Douglas Smith, director of the Center for Brain Injury and Repair at the University of Pennsylvania in Philadelphia.

“It is not mild; that term has led people astray,” says Smith. “It is something very serious that can lead to severe dysfunction.”

Smith and his colleagues have been working on a sensor that could be placed in a helmet or vehicle and that, like the radiation badges worn by nuclear-plant workers, would indicate exposure to blast forces likely to cause brain injury. The sensor is described in a paper to be published in NeuroImage.

While a sensor would indicate exposure to blast forces, it still isn’t clear exactly how that energy translates into brain trauma. Under everyday conditions, the brain can easily withstand a little jostling. “Plop down in your chair and your brain blobs around like Jell-O,” Smith says. But at tremendously high speeds, instead of gently stretching, brain cells can snap and break (SN: 3/13/10, p. 11) like glass.

The long-term effects of these busted brain cells are largely unknown. In addition to chronic headaches, vertigo and difficulty remembering words, research suggests that when the brain shuts down for even a few minutes, depression is more likely down the road.

Scott Matthews, a psychiatrist at the University of California, San Diego, who studies mild traumatic brain injury in returning veterans, notes that causality can’t be established. But among soldiers who were exposed to combat, he sees depression twice as often in people with traumatic brain injury.

“There’s more and more evidence that loss of consciousness changes the brain,” Matthews says.

Unraveling cause and effect and designing useful experiments to illuminate traumatic brain injury and its aftermath remains extremely challenging. And translating those scientific findings into meaningful policy can be just as difficult. Even implementing something as simple as a helmet with a face shield poses problems, says Smith.

“How do you deploy something like that?” he asks. “There are practical things like temperature issues. And then there’s wanting soldiers to be able to meet and greet in villages without looking like spacemen.”

Image: Computer model of the head. / Michelle Nyein.

By Rachel Ehrenberg
From Science News

New Microscope Reveals Ultrastructure of Cells

The new microscope delivers a high-resolution 3-D image of the entire cell in one step. This is an advantage over electron microscopy, in which a 3-D image is assembled from many thin sections, a process that can take weeks for a single cell. Nor does the cell need to be labelled with dyes, unlike in fluorescence microscopy, where only the labelled structures become visible. The new X-ray microscope instead exploits the natural contrast between organic material and water to form an image of all cell structures. Dr. Gerd Schneider and his microscopy team at the Institute for Soft Matter and Functional Materials have published their development in Nature Methods.

 This is a slice through the nucleus of a mouse adenocarcinoma cell showing the nucleolus and the membrane channels running across the nucleus; taken by X-ray nanotomography.

With the high resolution achieved by their microscope, the researchers, in cooperation with colleagues of the National Cancer Institute in the USA, have reconstructed mouse adenocarcinoma cells in three dimensions. The smallest of details were visible: the double membrane of the cell nucleus, nuclear pores in the nuclear envelope, membrane channels in the nucleus, numerous invaginations of the inner mitochondrial membrane, and inclusions in cell organelles such as lysosomes. Such insights will be crucial for shedding light on intracellular processes, such as how viruses or nanoparticles penetrate into cells or the nucleus.

This is the first time the so-called ultrastructure of cells has been imaged with X-rays to such precision, down to 30 nanometres. Ten nanometres are about one ten-thousandth of the width of a human hair. Ultrastructure is the detailed structure of a biological specimen that is too small to be seen with an optical microscope.

Researchers achieved this high 3-D resolution by illuminating the minute structures of the frozen-hydrated object with partially coherent light. This light is generated by BESSY II, the synchrotron source at HZB. Partial coherence is the property of two waves whose relative phase undergoes random fluctuations that are not, however, sufficient to make the waves completely incoherent. Illumination with partially coherent light generates significantly higher contrast for small object details compared to incoherent illumination. Combining this approach with a high-resolution lens, the researchers were able to visualize the ultrastructure of cells at hitherto unattained contrast.

The new X-ray microscope also allows for more space around the sample, which gives a better spatial view. This space had previously been severely limited by the setup for illuminating the sample: the required monochromatic X-ray light was created using a radial grid, and a diaphragm then selected the desired range of wavelengths from this light. The diaphragm had to be placed so close to the sample that there was almost no room to rotate the sample. The researchers modified this setup: monochromatic light is now collected by a new type of condenser that illuminates the object directly, so the diaphragm is no longer needed. This allows the sample to be rotated through up to 158 degrees and observed in three dimensions. These developments give structural biology a new tool for better understanding cell structure.

From sciencedaily.com

How to Train Your Own Brain

Technology might not be advanced enough yet to let people read someone else's mind, but researchers are at least inching closer to helping people to read and control their own. In a study presented last week at the Society for Neuroscience meeting in San Diego, scientists used a combination of brain-scanning and feedback techniques to train subjects to move a cursor up and down with their thoughts. The subjects could perform this task after just five minutes of training.

 Brainstorm: This fMRI scan highlights areas that are most active during two thought processes: one (SMA) is active when subjects think about tennis; the other (PPA) lights up when they imagine roaming through a familiar space.

The scientists hope to use this information to help addicts learn to control their own brain states and, consequently, their cravings.

Scientists have previously shown that people can learn to consciously control their brain activity if they're shown their brain activity data in real time—a technique called real-time functional magnetic resonance imaging (fMRI). Researchers have used this technology effectively to teach people to control chronic pain and depression. They've been pursuing similar feedback methods to help drug users kick their addictions.

But these efforts have been difficult to put into practice. Part of the problem is that scientists have had to choose which part of the brain to focus on, based on existing knowledge of neuroscience. But that approach may miss out on areas that are also important for the particular function under study.

In addition, focusing on a limited region adds extra noise to the system, much like looking too closely at a single swatch of a Pointillist painting: the mix of odd colors doesn't make sense until you step back and see how the dots fit together. Psychologist Anna Rose Childress, Jeremy Magland, and their colleagues at the University of Pennsylvania have overcome this issue by designing a new system of whole-brain imaging and pairing it with an algorithm that lets them determine which regions of the brain are most centrally involved in a given thought process.

"I think it's very exciting, and I think it's likely to be just the tip of a large iceberg of possibilities," says Christopher deCharms, a neuroscientist and founder of Omneuron, a company dedicated to using real-time fMRI to visualize brain function. "It's a small case demonstration that you can do this and you can do it in real time."

Childress asked 11 healthy controls and three cocaine addicts to watch a feedback screen while alternately envisioning two 30-second scenarios: Repeatedly swatting a tennis ball to someone, and navigating from room to room in a familiar place. By analyzing whole-brain activity, researchers found that a part of the brain called the supplementary motor area was most active during an imagined game of tennis. They then linked this pattern to an upward movement of a computer cursor. They did the same with the navigation task, linking it to downward movement of the cursor. After four cycles or fewer—less than five minutes of training—the subjects had learned to alternate between the two states of mind, as well as associate each one with its corresponding cursor position. From there onward, they could move the cursor up or down with their thoughts.
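
Stripped of the imaging machinery, the feedback loop reduces to mapping two decoded activity levels onto cursor movement. Here is a toy sketch; the region names come from the article, but the activity numbers are invented, and real fMRI decoding involves far more processing:

```python
# Toy version of the feedback loop: pick whichever of two identified brain
# regions (SMA for imagined tennis, PPA for imagined navigation) is more
# active, and move the cursor accordingly.

def cursor_step(sma_activity, ppa_activity, deadband=0.1):
    """Return +1 (up), -1 (down), or 0 when the signal is ambiguous."""
    diff = sma_activity - ppa_activity
    if diff > deadband:
        return +1   # imagined tennis -> cursor up
    if diff < -deadband:
        return -1   # imagined navigation -> cursor down
    return 0

# Simulated sequence of (SMA, PPA) activation pairs over a training run.
samples = [(0.9, 0.2), (0.3, 0.8), (0.5, 0.5), (0.7, 0.1)]
position = 0
for sma, ppa in samples:
    position += cursor_step(sma, ppa)
print("final cursor position:", position)
```

The deadband mimics the fact that an ambiguous brain state should leave the cursor where it is rather than jitter it up and down.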

"Conventional technology used up until now monitors a designated region of the brain, but the data tend to be noisy," Childress says. As a result, it's harder for researchers to determine what regions of the brain are important to control for feedback exercises. "But whole-brain information cancels out a lot of the noise."

The researchers found that both addicts and healthy people could control their state of mind equally well, something Childress says is encouraging for future studies. "The patients who have trouble controlling their craving could still demonstrate control over this sort of non-emotional test," she says. That confirms what earlier studies had suggested: Addicts' cognitive control issues are not linked to more general thinking, but instead limited to more emotionally charged thoughts, like cravings.

However, Childress's team will need to develop specialized tasks to figure out how to apply this to addiction and other disorders. For therapy, "You really need feedback from localized regions that have to do with their disease, and have people learn to control them," says Rainer Goebel, a professor of psychology at the University of Maastricht in the Netherlands who has done similar work with depression patients.

The University of Pennsylvania researchers are now developing just such a training program. For example, researchers might show cocaine addicts images or videos of stereotyped drug cues, identify the brain regions involved, and then use feedback training to teach people to dampen the activity in those regions.

By Lauren Gravitz
From Technology Review

How Brain Imaging Could Help Predict Alzheimer's

Developing drugs that effectively slow the course of Alzheimer's disease has been notoriously difficult. Scientists and drug developers believe that a large part of the problem is that they are testing these drugs too late in the progression of the disease, when significant damage to the brain makes intervention much more difficult.

"Drugs like Lilly's gamma secretase inhibitor failed because they were tested in the wrong group of patients," says Sangram Sisodia, director of the Center for Molecular Neurobiology at the University of Chicago. People in the mid or late stages of the disease "are too far gone, there is nothing you can do."

 Brain shrinking: Imaging reveals that people with mild cognitive impairment who are likely to develop Alzheimer’s disease have thinning in certain parts of the brain (shown in blue).

New brain imaging research may help solve that problem. Two studies presented at the Society for Neuroscience conference in San Diego this week identified changes in the brains of people who would go on to develop the disease. Researchers ultimately hope to use these changes to select patients for clinical tests of new drugs before they have developed signs of dementia.

"Brain changes that predict progression will hopefully allow us to detect the disease early, before it has caused irreversible damage," said Sarah Madsen, a graduate student at the University of California, Los Angeles, at a press briefing at the conference.

Recent research has focused on people with a condition known as mild cognitive impairment, which involves memory loss and other cognitive problems and can be a precursor to Alzheimer's. However, not everyone with this disorder will go on to develop the disease. A reliable method of predicting who will develop Alzheimer's would enable drug developers to focus their clinical testing. By testing drugs only in this carefully selected group, drug makers could more easily see the potential benefit of an experimental drug. It would also help them to avoid unnecessarily subjecting people to health risks.
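
The patient-selection logic described here boils down to evaluating a candidate marker against later diagnoses. Below is a minimal sketch of that evaluation; the thickness values and cutoff are entirely invented, and only the general threshold-then-compare approach mirrors the studies:

```python
# Toy evaluation of a hypothetical imaging marker for selecting trial
# patients: does "thin cortex at baseline" predict later conversion to
# Alzheimer's? All numbers are invented for illustration.

def sensitivity_specificity(predictions, outcomes):
    tp = sum(p and o for p, o in zip(predictions, outcomes))
    tn = sum(not p and not o for p, o in zip(predictions, outcomes))
    fp = sum(p and not o for p, o in zip(predictions, outcomes))
    fn = sum(not p and o for p, o in zip(predictions, outcomes))
    return tp / (tp + fn), tn / (tn + fp)

# (baseline cortical thickness in mm, converted to Alzheimer's?)
patients = [(2.1, True), (2.0, True), (2.4, True), (2.6, False),
            (2.7, False), (2.2, False), (2.5, False)]

CUTOFF = 2.3  # hypothetical: thinner than this counts as "at risk"
predicted = [thickness < CUTOFF for thickness, _ in patients]
actual = [converted for _, converted in patients]

sens, spec = sensitivity_specificity(predicted, actual)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

A marker good enough to enrich a drug trial needs both numbers to be high; a miss in either direction means either excluding patients who would convert or enrolling patients who never would.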

Sarah George, a graduate student at Rush University Medical Center, in Chicago, analyzed brain scans of 47 people with mild cognitive impairment, 22 of whom went on to develop Alzheimer's over the next six years. She focused on a part of the brain called the substantia innominata, which is known to be severely affected in Alzheimer's. Existing drugs for treating the disorder target a chemical messenger, acetylcholine, made by neurons in this part of the brain.

While George didn't find differences in the volume of the substantia innominata between the two groups, she did find differences in the parts of the brain that those neurons connect to. People who went on to develop the disease had significant thinning in three connected areas of the cortex involved in memory, attention, and integration of sensory and motor information.

The results are promising, says Sisodia. He says the findings may also shed light on the earliest progression of the disease. Research from animal studies suggests that synapses—the connections between neurons—are the first part of the brain to suffer. While still just a theory, it's possible that the brain regions that receive input from the substantia innominata shrink before that region itself does because they are losing their incoming synapses. However, he says, larger studies are needed to determine how accurate a predictor of Alzheimer's this measure can be.

In a second study, UCLA's Madsen analyzed MRI scans from 400 elderly individuals, some healthy, some mildly impaired, and some severely impaired, who had previously undergone brain scans, cognitive testing, and other types of medical testing. She focused on a C-shaped region in the center of the brain known as the caudate nucleus, which plays a key role in motor control and attention. Using mathematical tools to compare the size and shape of the caudate nucleus across different groups, she found that the caudate had shrunk most significantly in people with Alzheimer's disease—it was 7 percent smaller than in healthy people. People with mild cognitive impairment also showed some decline, about 4 percent compared to controls. Within the latter group, those who went on to develop Alzheimer's within the next year had a smaller caudate than those who did not.

Before drug developers can begin to use markers such as these to select patients for clinical testing, researchers need to better document early changes associated with the disease, says Sisodia. He points to one ongoing trial in Colombia that involves studying families who carry a genetic mutation that guarantees development of Alzheimer's. Because scientists know approximately when people with the mutation will develop the disease, they can carefully analyze their brains for early changes.

"Before drug trials, we need to better solidify the data," says Sisodia. "That's why doing the study in Colombian people to study the natural history of disease is so important. They can study the progression from a normal individual with perfect cognition to abnormal cognition and look for the cellular correlates of behavior."

By Emily Singer
From Technology Review

Silicon's Long Good-bye

Sometime in the coming decades, chipmakers will no longer be able to make silicon chips faster by packing smaller transistors onto a chip. That's because silicon transistors will simply be too leaky and expensive to make any smaller.

People working on materials that could succeed silicon have to overcome many challenges. Now researchers at the University of California, Berkeley, have found a way past one such hurdle: they've developed a reliable way to make fast, low-power, nanoscopic transistors out of a compound semiconductor material. Their method is simpler, and promises to be less expensive, than existing ones.

 Nanoribbons: Strips of indium arsenide have been chemically etched so that they release from the surface beneath. They can then be transferred to silicon wafers to make speedy, low-power transistors.

Compound semiconductors have better electrical properties than silicon, which means that transistors made from them require less power to operate at faster speeds. These materials are already in some expensive niche applications such as military telecommunications equipment, which gives them a leg up over more exotic potential silicon replacements like graphene and carbon nanotubes.

But wafers of compound semiconductor materials are also very fragile and expensive, "which is only okay where cost doesn't matter," says Ali Javey, associate professor of electrical engineering and computer sciences at the University of California, Berkeley.

Researchers believe they can overcome this fragility and expense by growing compound-semiconductor transistors on top of a supportive silicon wafer—a trick that should be compatible with existing manufacturing infrastructure.

However, compound semiconductors can't readily be grown on silicon—a mismatch between the crystalline structures of the two materials makes this difficult to do well. The Berkeley group has now shown that transistors made from compound semiconductors can be grown on another surface and then transferred to a silicon wafer. "That's a plausible path for dealing with the fact that compound semiconductors are difficult to grow," says Jesús del Alamo, a professor of electrical engineering and computer science at MIT who was not involved with Javey's work.

The Berkeley researchers demonstrated their technique using indium arsenide. They grow the material on top of a wafer of gallium antimonide protected by a sacrificial top layer of aluminum gallium antimonide. The wafer enables the growth of a high-quality, crystalline indium-arsenide film, and the sacrificial layer can then be chemically etched away, releasing nanoscale indium-arsenide strips. The researchers pick up these nanoribbons with a rubber stamp and place them on top of the silicon wafer. The silicon provides structural support for the indium arsenide, and is coated with silicon dioxide, which acts as the insulator in the transistors. The transistors are completed by laying down metal electrodes to carry electricity in and out.

Javey's group describes the performance of indium-arsenide transistors made in this way in a paper published online last week in the journal Nature. The transistors, which are 500 nanometers long, perform as well as compound-semiconductor transistors made using more complex techniques, Javey says. And the Berkeley group's indium-arsenide transistors are much faster than their silicon equivalents, while requiring less power—half a volt as compared with 3.3 volts. Their transconductance—how responsive they are to changes in voltage—is eight times better than that for a silicon transistor this size. "Given how these devices were prepared, this performance is quite impressive," says MIT electrical engineering professor Dmitri Antoniadis.
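
One way to see why the lower operating voltage matters: the energy to switch a transistor scales roughly with the square of the supply voltage. A back-of-the-envelope sketch using the article's voltages (the capacitance below is a placeholder, not a measured device value):

```python
# Switching energy of a transistor scales roughly as C * V^2.
# The voltages (0.5 V for the indium-arsenide devices vs 3.3 V for the
# silicon comparison) come from the article; the capacitance is a
# made-up placeholder that cancels out of the ratio anyway.

def switching_energy(capacitance_f, voltage_v):
    return capacitance_f * voltage_v ** 2

C = 1e-15  # 1 femtofarad, hypothetical gate capacitance
e_inas = switching_energy(C, 0.5)
e_si = switching_energy(C, 3.3)

print(f"energy ratio silicon/InAs: {e_si / e_inas:.1f}x")
```

Because the capacitance cancels, the ratio depends only on the voltages: dropping from 3.3 V to 0.5 V cuts the per-switch energy by a factor of about 44 in this simple estimate.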

Javey notes that the process required to make the indium-arsenide transistors is similar to that used to make a class of chips called silicon-on-insulator (SOI) electronics, which require a slice of silicon to be placed on a wafer of another material during manufacturing. For that reason he's named them XOI—anything on insulator.

The process for making the XOI devices at wafer-scale would be more complex than SOI because it might require integrating several different types of materials built on wafers of different sizes, says Michael Mayberry, director of components research at Intel. "There are lots of ways that process could go wrong," he says. For the past three years, Intel has been working on processes for growing compound semiconductors on silicon wafers directly, by growing a buffer layer in between them. So far, they have to use a very thick buffer that impedes the performance of the transistors, but Mayberry says they have proven that the concept can work.

The value of Javey's work, Mayberry says, is that it demonstrates that the indium-arsenide transistors perform well when shrunk down to the nanoscale. "We don't know how these devices will behave," he says. Theorists have made guesses, he says, but at the nanoscale, unexpected quantum effects can crop up.

Javey plans to make the transistors much smaller and see whether they maintain their performance. MIT's del Alamo and Antoniadis are trying to determine the ultimate scaling of compound-semiconductor transistors; the pair have made transistors that are 30 nanometers long. "I would like to see what perfection of materials can be achieved at a small scale," says Antoniadis. 

By Katherine Bourzac
From Technology Review

Planet from Another Galaxy Discovered: Galactic Cannibalism Brings an Exoplanet of Extragalactic Origin Within Astronomers' Reach

The results are published in Science Express.

"This discovery is very exciting," says Rainer Klement of the Max-Planck-Institut für Astronomie (MPIA), who was responsible for the selection of the target stars for this study. "For the first time, astronomers have detected a planetary system in a stellar stream of extragalactic origin. Because of the great distances involved, there are no confirmed detections of planets in other galaxies. But this cosmic merger has brought an extragalactic planet within our reach."

 This artist’s impression shows HIP 13044 b, an exoplanet orbiting a star that entered our galaxy, the Milky Way, from another galaxy. This planet of extragalactic origin was detected by a European team of astronomers using the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. The Jupiter-like planet is particularly unusual, as it is orbiting a star nearing the end of its life and could be about to be engulfed by it, giving clues about the fate of our own planetary system in the distant future.


The star is known as HIP 13044, and it lies about 2000 light-years from Earth in the southern constellation of Fornax (the Furnace). The astronomers detected the planet, called HIP 13044 b, by looking for the tiny telltale wobbles of the star caused by the gravitational tug of an orbiting companion. For these precise observations, the team used the high-resolution spectrograph FEROS [3] attached to the 2.2-metre MPG/ESO telescope [4] at ESO's La Silla Observatory in Chile.

Adding to its claim to fame, HIP 13044 b is also one of the few exoplanets known to have survived the period when its host star expanded massively after exhausting the hydrogen fuel supply in its core -- the red giant phase of stellar evolution. The star has now contracted again and is burning helium in its core. Until now, these so-called horizontal branch stars have remained largely uncharted territory for planet-hunters.

"This discovery is part of a study where we are systematically searching for exoplanets that orbit stars nearing the end of their lives," says Johny Setiawan, also from MPIA, who led the research. "This discovery is particularly intriguing when we consider the distant future of our own planetary system, as the Sun is also expected to become a red giant in about five billion years."

HIP 13044 b lies close to its host star. At the closest point in its elliptical orbit, it is less than one stellar diameter from the surface of the star (or 0.055 times the Sun-Earth distance). It completes an orbit in only 16.2 days. Setiawan and his colleagues hypothesise that the planet's orbit might initially have been much larger, but that it moved inwards during the red giant phase.
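
As a quick sanity check on these numbers, Kepler's third law relates the orbital period to the size of the orbit. A minimal sketch in Python, assuming a stellar mass of about 0.8 solar masses (an assumption on our part; the article does not give the star's mass):

```python
# Rough consistency check of the orbit using Kepler's third law in solar
# units: a^3 = M * P^2, with a in AU, P in years, M in solar masses.
# The 16.2-day period and 0.055 AU closest approach come from the article;
# the stellar mass is an assumed value.

P_years = 16.2 / 365.25
M_star = 0.8  # assumed mass of HIP 13044, solar masses (not from the article)

a = (M_star * P_years ** 2) ** (1 / 3)  # semi-major axis, AU
print(f"semi-major axis ~ {a:.3f} AU")
# The quoted closest approach (0.055 AU) is smaller than this semi-major
# axis, as expected for an elliptical orbit.
```

Under this assumed mass the semi-major axis works out to roughly 0.12 AU, comfortably larger than the 0.055 AU perihelion, which is consistent with the eccentric orbit the text describes.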

Any closer-in planets may not have been so lucky. "The star is rotating relatively quickly for a horizontal branch star," says Setiawan. "One explanation is that HIP 13044 swallowed its inner planets during the red giant phase, which would make the star spin more quickly."

Although HIP 13044 b has escaped the fate of these inner planets so far, the star will expand again in the next stage of its evolution. HIP 13044 b may therefore be about to be engulfed by the star, meaning that it is doomed after all. This could also foretell the demise of our outer planets -- such as Jupiter -- when the Sun approaches the end of its life.

The star also poses interesting questions about how giant planets form, as it appears to contain very few elements heavier than hydrogen and helium -- fewer than any other star known to host planets. "It is a puzzle for the widely accepted model of planet formation to explain how such a star, which contains hardly any heavy elements at all, could have formed a planet. Planets around stars like this must probably form in a different way," adds Setiawan.

Notes
[1] There have been tentative claims of the detection of extragalactic exoplanets through "gravitational microlensing" events, in which the planet passing in front of an even more distant star leads to a subtle, but detectable "flash." However, this method relies on a singular event -- the chance alignment of a distant light source, planetary system and observers on Earth -- and no such extragalactic planet detection has been confirmed.

[2] Using the radial velocity method, astronomers can only estimate a minimum mass for a planet, as the mass estimate also depends on the tilt of the orbital plane relative to the line of sight, which is unknown. From a statistical point of view, this minimum mass is however often close to the real mass of the planet.

[3] FEROS stands for Fibre-fed Extended Range Optical Spectrograph.

[4] The 2.2-metre telescope has been in operation at La Silla since early 1984 and is on indefinite loan to ESO from the Max-Planck Society (Max Planck Gesellschaft or MPG in German). Telescope time is shared between MPG and ESO observing programmes, while the operation and maintenance of the telescope are ESO's responsibility.

From sciencedaily.com

Antimatter Atoms Stored for the First Time

ALPHA stored atoms of antihydrogen, consisting of a single negatively charged antiproton orbited by a single positively charged anti-electron (positron). While the number of trapped anti-atoms is far too small to fuel the Starship Enterprise's matter-antimatter reactor, this advance brings closer the day when scientists will be able to make precision tests of the fundamental symmetries of nature. Measurements of anti-atoms may reveal how the physics of antimatter differs from that of the ordinary matter that dominates the world we know today.

Large quantities of antihydrogen atoms were first made at CERN eight years ago by two other teams. Although they made antimatter, they couldn't store it, because the anti-atoms touched the ordinary-matter walls of the experiments within millionths of a second after forming and were instantly annihilated -- completely destroyed by conversion to energy and other particles.

 An artist's impression of an antihydrogen atom -- a negatively charged antiproton orbited by a positively charged anti-electron, or positron -- trapped by magnetic fields.


"Trapping antihydrogen proved to be much more difficult than creating antihydrogen," says ALPHA team member Joel Fajans, a scientist in Berkeley Lab's Accelerator and Fusion Research Division (AFRD) and a professor of physics at UC Berkeley. "ALPHA routinely makes thousands of antihydrogen atoms in a single second, but most are too 'hot'" -- too energetic -- "to be held in the trap. We have to be lucky to catch one."
The ALPHA collaboration succeeded by using a specially designed magnetic bottle called a Minimum Magnetic Field Trap. The main component is an octupole (eight-magnetic-pole) magnet whose fields keep anti-atoms away from the walls of the trap and thus prevent them from annihilating. Fajans and his colleagues in AFRD and at UC proposed, designed, and tested the octupole magnet, which was fabricated at Brookhaven. ALPHA team member Jonathan Wurtele of AFRD, also a professor of physics at UC Berkeley, led a team of Berkeley Lab staff members and visiting scientists who used computer simulations to verify the advantages of the octupole trap.

In a paper published online in Nature ahead of print, the ALPHA team reports the results of 335 experimental trials, each lasting one second, during which the anti-atoms were created and stored. The trials were repeated at intervals never shorter than 15 minutes. To form antihydrogen during these sessions, antiprotons were mixed with positrons inside the trap. As soon as the trap's magnet was "quenched," any trapped anti-atoms were released, and their subsequent annihilation was recorded by silicon detectors. In this way the researchers recorded 38 antihydrogen atoms, which had been held in the trap for almost two-tenths of a second.
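
As a rough back-of-envelope check, the reported counts imply about one trapped anti-atom per nine one-second trials (a sketch of the arithmetic; the numbers come straight from the article):

```python
# Per-trial antihydrogen trapping yield implied by the reported counts.
trials = 335          # one-second mixing/trapping attempts
atoms_detected = 38   # annihilations identified as trapped antihydrogen

yield_per_trial = atoms_detected / trials
print(f"~{yield_per_trial:.3f} trapped anti-atoms per trial "
      f"(about 1 every {1 / yield_per_trial:.0f} trials)")
```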

"Proof that we trapped antihydrogen rests on establishing that our signal is not due to a background," says Fajans. While many more than 38 antihydrogen atoms are likely to have been captured during the 335 trials, the researchers were careful to confirm that each candidate event was in fact an anti-atom annihilation and was not the passage of a cosmic ray or, more difficult to rule out, the annihilation of a bare antiproton.
To discriminate between real events and background, the ALPHA team used computer simulations based on theoretical calculations to show how background events would be distributed in the detector versus how real antihydrogen annihilations would appear. Fajans and Francis Robicheaux of Auburn University contributed simulations of how mirror-trapped antiprotons (those confined by magnet coils around the ends of the octupole magnet) might mimic anti-atom annihilations, and how actual antihydrogen would behave in the trap.

Learning from antimatter
Before 1928, when anti-electrons were predicted on theoretical grounds by Paul Dirac, the existence of antimatter was unsuspected. In 1932 anti-electrons (positrons) were found in cosmic ray debris by Carl Anderson. The first antiprotons were deliberately created in 1955 at Berkeley Lab's Bevatron, the highest-energy particle accelerator of its day.

At first physicists saw no reason why antimatter and matter shouldn't behave symmetrically, that is, obey the laws of physics in the same way. But if so, equal amounts of each would have been made in the big bang -- in which case they should have mutually annihilated, leaving nothing behind. And if somehow that fate were avoided, equal amounts of matter and antimatter should remain today, which is clearly not the case.

In the 1960s, physicists discovered subatomic particles that decayed in a way only possible if the symmetry known as charge conjugation and parity (CP) had been violated in the process. As a result, the researchers realized, antimatter must behave slightly differently from ordinary matter. Still, even though some antiparticles violate CP, antiparticles moving backward in time ought to obey the same laws of physics as do ordinary particles moving forward in time. CPT symmetry (T is for time) should not be violated.

One way to test this assumption would be to compare the energy levels of ordinary electrons orbiting an ordinary proton to the energy levels of positrons orbiting an antiproton, that is, compare the spectra of ordinary hydrogen and antihydrogen atoms. Testing CPT symmetry with antihydrogen atoms is a major goal of the ALPHA experiment.
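
The comparison rests on the fact that ordinary hydrogen's energy levels are known to extraordinary precision; CPT symmetry predicts identical levels for antihydrogen. A minimal sketch of the hydrogen side of that comparison, using the standard Rydberg formula (the 1S-2S transition is the usual benchmark for precision spectroscopy; this is textbook physics, not a detail from the article):

```python
# Hydrogen energy levels from the Rydberg formula: E_n = -13.6 eV / n^2.
# CPT symmetry predicts antihydrogen should show exactly the same spectrum.
RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy, in eV

def level_eV(n):
    """Energy of the n-th hydrogen level (negative = bound)."""
    return -RYDBERG_EV / n**2

transition_1s_2s = level_eV(2) - level_eV(1)  # benchmark precision transition
print(f"1S-2S transition energy: {transition_1s_2s:.3f} eV")
```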

How to make and store antihydrogen
To make antihydrogen, the accelerator complex that feeds protons to the Large Hadron Collider (LHC) at CERN diverts some of those protons to slam into a metal target; the antiprotons that result are held in CERN's Antiproton Decelerator ring, which delivers bunches of antiprotons to ALPHA and another antimatter experiment.

Wurtele says, "It's hard to catch p-bars" -- the symbol for antiproton is a small letter p with a bar over it -- "because you have to cool them all the way down from a hundred million electron volts to fifty millionths of an electron volt."
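
The scale of that cooling task is easy to quantify from the figures in the quote (a sketch; the conversion from energy to an equivalent temperature uses the standard Boltzmann constant):

```python
# How far the antiprotons must be cooled, per the figures quoted above.
E_initial_eV = 100e6     # "a hundred million electron volts"
E_final_eV = 50e-6       # "fifty millionths of an electron volt"
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant, in eV per kelvin

cooling_factor = E_initial_eV / E_final_eV
final_temp_K = E_final_eV / K_B_EV_PER_K

print(f"energy reduced by a factor of {cooling_factor:.0e}")
print(f"final energy corresponds to roughly {final_temp_K:.2f} K")
```

That is a reduction of twelve orders of magnitude, down to an energy equivalent of a bit more than half a kelvin above absolute zero.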

In the ALPHA experiment the antiprotons are passed through a series of physical barriers, magnetic and electric fields, and clouds of cold electrons, to further cool them. Finally the low-energy antiprotons are introduced into ALPHA's trapping region.

Meanwhile low-energy positrons, originating from decays in a radioactive sodium source, are brought into the trap from the opposite end. Being charged particles, both positrons and antiprotons can be held in separate sections of the trap by a combination of electric and magnetic fields -- a cloud of positrons in an "up well" in the center and the antiprotons in a "down well" toward the ends of the trap.

To join the positrons in their central well, the antiprotons must be carefully nudged by an oscillating electric field, which increases their velocity in a controlled way through a phenomenon called autoresonance.

"It's like pushing a kid on a playground swing," says Fajans, who credits his former graduate student Erik Gilson and Lazar Friedland, a professor at Hebrew University and visitor at Berkeley, with early development of the technique. "How high the swing goes doesn't have as much to do with how hard you push or how heavy the kid is or how long the chains are, but instead with the timing of your pushes."
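
The swing analogy can be illustrated numerically. The sketch below shows plain resonance -- the simpler cousin of autoresonance -- for a driven oscillator: pushes timed at the natural frequency pump the amplitude up, while equally strong but mistimed pushes barely move it. (All numbers here are illustrative, not ALPHA parameters; real autoresonance additionally sweeps the drive frequency so the oscillator stays phase-locked as its amplitude grows.)

```python
import math

def drive_oscillator(omega_drive, omega0=1.0, force=0.1, dt=1e-3, steps=50_000):
    """Semi-implicit Euler integration of x'' = -omega0^2 x + force*cos(omega_drive*t).
    Returns the largest displacement reached, starting from rest."""
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        v += (-omega0**2 * x + force * math.cos(omega_drive * t)) * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

on_resonance = drive_oscillator(omega_drive=1.0)   # pushes timed to the swing
off_resonance = drive_oscillator(omega_drive=2.0)  # same push strength, wrong timing
print(f"well-timed pushes reach {on_resonance:.2f}; mistimed pushes only {off_resonance:.2f}")
```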

The novel autoresonance technique turned out to be essential for adding energy to antiprotons precisely, in order to form relatively low energy anti-atoms. The newly formed anti-atoms are neutral in charge, but because of their spin and the distribution of the opposite charges of their components, they have a magnetic moment; provided their energy is low enough, they can be captured in the octupole magnetic field and mirror fields of the Minimum Magnetic Field Trap.
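
The depth of such a minimum-B trap for ground-state antihydrogen can be estimated from the atom's magnetic moment, which is close to one Bohr magneton. The field variation of 0.8 tesla between trap center and wall below is an illustrative stand-in value, not a figure from the article:

```python
# Estimated well depth of a minimum-B magnetic trap for ground-state antihydrogen.
MU_B_EV_PER_T = 5.788e-5  # Bohr magneton, in eV per tesla
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant, in eV per kelvin
delta_B = 0.8             # assumed field difference across the trap, in tesla

depth_eV = MU_B_EV_PER_T * delta_B
depth_K = depth_eV / K_B_EV_PER_K
print(f"trap depth ~{depth_K:.1f} K")  # only anti-atoms colder than this stay trapped
```

A well depth of roughly half a kelvin explains why so few of the freshly formed anti-atoms are cold enough to be held.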

Of the thousands of antihydrogen atoms made in each one-second mixing session, most are too energetic to be held and annihilate themselves against the trap walls.

Setting the ALPHA 38 free
After mixing and trapping -- plus the "clearing" of the many bare antiprotons that have not formed antihydrogen -- the superconducting magnet that produces the confining field is abruptly turned off -- within a mere nine-thousandths of a second. This causes the magnet to "quench," a quick return to normal conductivity that results in fast heating and stress.

"Millisecond quenches are almost unheard of," Fajans says. "Deliberately turning off a superconducting magnet is usually done thousands of times more slowly, and not with a quench. We did a lot of experiments at Berkeley Lab to make sure the ALPHA magnet could survive multiple rapid quenches."

From the start of the quench the researchers allowed 30-thousandths of a second for any trapped antihydrogen to escape the trap, as well as any bare antiprotons that might still be in the trap. Cosmic rays might also wander through the experiment during this interval. By using electric fields to sweep the trap of charged particles or steer them to one end of the detectors or the other, and by comparing the real data with computer simulations of candidate antihydrogen annihilations and look-alike events, the researchers were able to unambiguously identify 38 antihydrogen atoms that had survived in the trap for at least 172 milliseconds -- almost two-tenths of a second.

Says Fajans, "Our report in Nature describes ALPHA's first successes at trapping antihydrogen atoms, but we're constantly improving the number and length of time we can hold onto them. We're getting close to the point where we can do some classes of experiments on antimatter atoms. The first attempts will be crude, but no one has ever done anything like them before."

From sciencedaily.com