Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed....

Transcending the Human, DIY Style

BERLIN — Lepht Anonym wants everyone to know the door to transcending normal human capabilities is no farther away than your own kitchen. It’s just going to hurt like a sonofabitch.

An articulate advocate for practical transhumanism.

Anonym is a biohacker, a woman who has spent the last several years learning how to extend her own senses by putting tiny magnets and other electronic devices under her own skin, allowing her to feel electromagnetic fields, or — if her latest project works — even magnetic north.

Since doctors won’t help her, she does it in her own apartment, sterilizing her equipment (needles, scalpels, vegetable peelers) with vodka. Good anesthetic is largely impossible to buy, so she screams a little, and sometimes passes out. But it’s worth it, for what’s on the other side.

“Bodily health takes a big fuck-off second seat to curiosity,” she says. “Though it hasn’t really changed my life, it’s just made me more curious.”

This is DIY transhumanism, the fringe of a movement that itself lies well outside the mainstream of philosophy, ethics, technology and science.

For decades, transhumanists have argued that science and technology are approaching (or have approached) the point at which humans can take evolution into their own hands. They can transcend limitations of sensation or movement or even lifespan that are purely the accident of evolution. Some thinkers focus strictly on the “post-human” physical body, while others write of evolved social systems, as well.

Anonym’s vision of the transhuman is rather different. Less visionary, possibly, but more realistic. What she does is “grinding,” with homemade cybernetics and an intimate familiarity with medical mistakes, driven by a consuming curiosity rather than a philosophical creed. 

She does her own surgery, with a scalpel and a spotter to catch her if she passes out, and an anatomy book to give her some confidence she isn’t going to slice through a vein or the very nerves she’s trying to enhance.

“The existing transhumanist movement is lame. It’s nano everything. It’s just ideas,” she says. “Anyone can do this. This is kitchen stuff.”

Visiting Berlin to speak at this week’s Chaos Computer Club Congress, Anonym proves to be witty and articulate, a slender woman with spiky black hair and dark makeup around her eyes. She has a way of moving as she talks that suggests thought is a kind of physical thing for her too, like the electromagnetic fields she can sense with her modified fingertips. 

She has tattoos and piercings on her face, but there’s nothing obvious to indicate her practice — even her fingers look smooth and unscarred, though the metal discs can be felt faintly under one pad.

The Aberdeen, Scotland, native got her start about two years ago, experimenting first with RFID sensors under her skin that let her do things like lock a computer specifically to her signature. That was a decent start, but didn’t scratch the itch entirely. (Anyway, she says now, RFID is crap as a personal security system; it’s really only a way to experiment with the implant techniques.)

She moved on to trying a transdermal (emerging through the skin) temperature sensor, which would show a variable level of brightness to indicate the temperature. It was a disaster, she says. Mostly she learned rather uncomfortably that waterproofing is not the same as “bioproofing” something. She gave up quickly on the transdermal idea, but not the broader project.

An American body-modification artist of a similar mindset has created small discs of neodymium, coated in gold and silicone, which give off a mild electric current when in an electromagnetic field. When inserted under the fingertips, this current stimulates the fingers’ nerve endings, allowing the bearer to literally feel the shape and strength of electromagnetic fields around power cords or electronic devices.

Anonym had several of these implanted professionally, choking at the cost, and then learned it was possible to buy the metal herself in bulk, far more cheaply.

So she began experimenting with homebrewed sensors. The metal itself is extremely toxic, so she needed a coating to bioproof it, finding a solution ultimately in a silicone putty-like substance called Sugru. But glue from a hot-glue gun works fine too, she says. (“I have lots of things in me coated in hot-gun glue,” she says.)

The upshot was an affordable way to continue — all 10 fingertips for about 20 British pounds. She has one left to go.

She’s calling her next project the “Southpaw.” It’s based on the Northpaw, a wearable device created by the Sensebridge group of wearable-electronics hackers. The Northpaw is worn around the ankle and gives a constant gentle motor-derived vibration on whichever side is facing north.

It’s not finished yet, but Anonym is trying to give something internal the same function — a small compass chip, a power coil that can be charged externally, and output in the form of neural-grade electrodes, all to be implanted near her left knee. It’s a much bigger project than her others, and probably riskier. She doesn’t care.

She wants other people to share her DIY vision. It’s not the full transhumanist idea, it’s not immortality or superpowers — but even living without the gentle sensation of feeling the invisible is a difficult thing to imagine, she says. One of the implants stopped functioning once, and she describes it as like going blind.

But it isn’t for everybody, this cutting yourself up in your own kitchen. She’s the first to warn people that it hurts. A lot. It hurts every time; you don’t get used to it. Afterward, people may not be inclined to understand, to put it mildly. (“Avoid normal people,” she warns. “They’re stupid.”)

The medical consequences can be severe, and they are likely to elicit hostility from doctors. She’s put herself in the hospital several times. She nearly lost a fingertip the first time she tried to implant a neodymium disc herself. Various experiments with bioproofing have failed, with implants rusting under her skin, or her own self-surgeries turning septic.

But if that list of horrors isn’t enough to scare someone off, she’s also eager to help others avoid some of the mistakes she’s made in learning.

“You just have to get deep enough to open a hole and put something in,” she says. “It’s that simple.”

By John Borland 

Climate Models Miss Effects of Wind-Shattered Dust

Clumps of dust in the desert shatter like glass on a kitchen floor. This similarity may mean the atmosphere carries more large dust particles than climate models assume.

Dust and other airborne particles’ effect in the atmosphere is “one of the most important problems we need to solve in order to provide better predictions of climate,” said climate scientist Jasper Kok of the National Center for Atmospheric Research in Boulder, Colorado. Other researchers suspect current models also neglect a large fraction of the climate-warming dust that clogs the skies after dust storms.

Most climate models use dust data from satellites that measure how many particles of different sizes are suspended in the atmosphere. These measurements reveal an abundance of tiny clay particles roughly 2 micrometers across (about one-third the width of a red blood cell), which can reflect sunlight back into space and cool the planet.

But satellites may be missing larger particles, called silts, which don’t hang around in the air as long. Silts up to 20 micrometers in diameter can act as a warm blanket to trap heat inside the Earth’s atmosphere.

To figure out how much clay and silt is actually kicked up from the Earth’s deserts, Kok turned to a well-studied problem in physics: how glass breaks.

Cracks spread through breaking glass in specific patterns, creating predictable numbers and sizes of glass shards. The final distribution of small, medium and large glass fragments follows a mathematical law called scale invariance.
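Scale invariance means the fragment-size distribution follows a power law with no characteristic size, so it looks the same under rescaling. A minimal numerical sketch (the exponent here is illustrative, not Kok's fitted value):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw fragment sizes from a power-law (Pareto) distribution.
# A survival function N(>s) ~ s**-alpha has no characteristic scale:
# rescaling s -> k*s only multiplies N(>s) by a constant factor.
alpha = 1.5                                  # illustrative exponent
sizes = rng.pareto(alpha, 200_000) + 1.0     # minimum size 1 (arbitrary units)

def survival(s):
    """Fraction of fragments larger than s."""
    return np.mean(sizes > s)

# Scale invariance: doubling the size threshold reduces the count by
# (roughly) the same factor 2**-alpha at every scale s.
ratios = [survival(2 * s) / survival(s) for s in (2.0, 4.0, 8.0)]
print(ratios)   # each ratio close to 2**-1.5, regardless of s
```

The same constant ratio at every scale is what lets one fragmentation law describe shards from glass panes, dust clumps, and asteroids alike.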

“It shows up all across nature, from asteroids to atomic nuclei,” Kok said. “It’s really just beautiful.”

In a paper published Dec. 28 in the Proceedings of the National Academy of Sciences, Kok showed that the physics of how dust clumps break apart is similar to that of glass breaking.

Soil scientists have long known that dust clumps act like brittle materials, and physicists have well-tested mathematical descriptions of how brittle materials break. “But no one had put one and two together,” Kok said.

When wind picks up in the desert, Kok says, the particles that move first are large sand particles, up to 500 micrometers across. Silt-sized and smaller dust grains tend to stick together until a bouncing sand particle smacks into them.

“It’s physically analogous to hitting your windshield with a hammer, or dropping a drinking glass on the kitchen floor,” Kok said.

Cracks spread through the clump of soil as they would through a pane of glass, sending the same fraction of small, medium and large particles bouncing into the atmosphere. Kok compared his theory to ground measurements made in the middle of dust storms in six locations around the world and found they matched perfectly.

“Even though we don’t have an abundance of measurements, I think we have sufficient measurements to say this theory is a step in the right direction,” Kok said.

Kok’s theory suggests that dust storms produce two to eight times more silt-sized particles than climatologists previously thought. Neglecting the boost in particles suggests that climate models, and even short-term weather models for dusty regions, are somewhat off. Until climate scientists better understand how dust changes over time, however, Kok said it’s tough to gauge the effects.

“I thought it was a breakthrough, a real original idea,” said atmospheric physicist Charles Zender of the University of California at Irvine, who was not involved in the new work. Similarities to fractured glass may show up in other earth science systems, like earthquakes or glacier calving, he added. “Whether it’s submicron and invisible to the human eye, or as large as Greenland, it doesn’t matter. It’s the same property.”

Dust expert Tom Gill of the University of Texas at El Paso thinks Kok’s theory is elegant, though it will have to be backed up by lab and field experiments. If it holds up, however, “it has the potential to make some real improvements in modeling how dust and dust-like things move around and disperse and fall out of the air. That has implications for everything from global climate to volcanoes to hurricanes,” he said. “I’m very encouraged by it.”

By Lisa Grossman 

MRI Scans Reveal Brain Changes in People at Genetic Risk for Alzheimer's

Researchers at Washington University School of Medicine in St. Louis report in the Dec. 15 issue of The Journal of Neuroscience that these patients had a particular form of the apolipoprotein E (APOE) gene called APOE4. The findings suggest that the gene variant affects brain function long before the brain begins accumulating the amyloid that will eventually lead to dementia.

 Researchers identified functional differences in the brains of APOE4-positive and APOE4-negative people. Red indicates increased connectivity among regions at rest while blue shows decreased connectivity.

"We looked at a group of structures in the brain that make up what's called the default mode network," says lead author Yvette I. Sheline, MD. "In particular, we are interested in a part of the brain called the precuneus, which may be important in Alzheimer's disease and in pre-Alzheimer's because it is one of the first regions to develop amyloid deposits. Another factor is that when you look at all of the structural and functional connections in the brain, the most connected structure is the precuneus. It links many other key brain structures together."

The research team conducted functional MRI scans on 100 people whose average age was 62. Just under half of them carried the APOE4 variant, which is a genetic risk factor for late-onset Alzheimer's disease. Earlier PET scans of the study subjects had demonstrated that they did not have amyloid deposits in the brain. Amyloid is the protein that makes up the senile plaques that dot the brains of Alzheimer's patients and interfere with cognitive function.

Participants in the study also underwent spinal puncture tests that revealed they had normal amyloid levels in their cerebrospinal fluid.

"Their brains were 'clean as a whistle,' " says Sheline, a professor of psychiatry, of radiology and of neurology and director of Washington University's Center for Depression, Stress and Neuroimaging. "As far as their brain amyloid burden and their cerebrospinal fluid levels, these individuals were completely normal. But the people who had the APOE4 variant had significant differences in the way various brain regions connected with one another."

Sheline's team focused on the brain's default mode network. Typically, the default network is active when the mind rests. Its activity slows down when an individual concentrates.

Subjects don't need to perform any particular tasks for researchers to study the default mode network. They simply relax in the MRI scanner and reflect or daydream while the machine measures oxygen levels and blood flow in the brain.

"We make sure they don't go to sleep," Sheline says. "But other than not sleeping, study participants had no instructions. They were just lying there at rest, and we looked at what their brains were doing."

This is the latest in a series of studies in which Sheline and her colleagues have looked at brain function in people at risk for Alzheimer's disease. Initially, her team compared the default mode networks in the brains of people with mild Alzheimer's disease to the same structures in the brains of those who were cognitively normal. In that study, her team found significant differences in how the network functioned.

In a second study, they used PET imaging to identify cognitively normal people with amyloid deposits in their brains, comparing them to others whose PET scans showed no evidence of amyloid. Again, the default mode network operated differently in those with amyloid deposits.

In the current study, there was no evidence of dementia or amyloid deposits. But still, in those with the APOE4 variant, there was irregular functioning in the default mode network.

APOE4 is the major genetic risk factor for sporadic cases of Alzheimer's disease. Other genes that pass on inherited, early-onset forms of the disease have been identified, but APOE4 is the most important genetic marker of the disease identified so far, Sheline says.

The study subjects, all of whom participate in studies through the university's Charles F. and Joanne Knight Alzheimer's Disease Research Center, will be followed to see whether they eventually develop amyloid deposits. Sheline anticipates many will.

"I think a significant number of them eventually will be positive for amyloid," she says. "We hope that if some people begin to accumulate amyloid, we'll be able to look back at our data and identify particular patterns of brain function that might eventually be used to predict who is developing Alzheimer's disease."

The goal is to identify those with the highest risk of Alzheimer's and to develop treatments that interfere with the progression of the disease, keeping it from advancing to the stage when amyloid begins to build up in the brain and, eventually, dementia sets in.

"The current belief is that from the time excess amyloid begins to collect in the brain, it takes about 10 years for a person to develop dementia," Sheline says. "But this new study would suggest we might be able to intervene even before amyloid plaques begin to form. That could give us an even longer time window to intervene once an effective treatment can be developed."

This work was supported by grants from the National Institute of Mental Health and the National Institute on Aging of the National Institutes of Health.


'Breathalyzers' May Be Useful for Medical Diagnostics

The researchers demonstrated their approach is capable of rapidly detecting biomarkers in the parts per billion to parts per million range, at least 100 times better than previous breath-analysis technologies, said Carlos Martinez, an assistant professor of materials engineering at Purdue who is working with researchers at the National Institute of Standards and Technology.

 This image shows a new type of sensor for an advanced breath-analysis technology that rapidly diagnoses patients by detecting "biomarkers" in a person's respiration in real time.

"People have been working in this area for about 30 years but have not been able to detect low enough concentrations in real time," he said. "We solved that problem with the materials we developed, and we are now focusing on how to be very specific, how to distinguish particular biomarkers."

The technology works by detecting changes in electrical resistance or conductance as gases pass over sensors built on top of "microhotplates," tiny heating devices on electronic chips. Detecting biomarkers provides a record of a patient's health profile, indicating the possible presence of cancer and other diseases.

"We are talking about creating an inexpensive, rapid way of collecting diagnostic information about a patient," Martinez said. "It might say, 'there is a certain percentage that you are metabolizing a specific compound indicative of this type of cancer,' and then additional, more complex tests could be conducted to confirm the diagnosis."

The researchers used the technology to detect acetone, a biomarker for diabetes, with a sensitivity in the parts per billion range in a gas mimicking a person's breath.

Findings were detailed in a research paper that appeared earlier this year in the IEEE Sensors Journal, published by the Institute of Electrical and Electronics Engineers' IEEE Sensors Council. The paper was co-authored by Martinez and NIST researchers Steve Semancik, lead author Kurt D. Benkstein, Baranidharan Raman and Christopher B. Montgomery.

The researchers used a template made of micron-size polymer particles and coated them with far smaller metal oxide nanoparticles. Using nanoparticle-coated microparticles instead of a flat surface allows researchers to increase the porosity of the sensor films, increasing the "active sensing surface area" to improve sensitivity.

A droplet of the nanoparticle-coated polymer microparticles was deposited on each microhotplate; the microhotplates are about 100 microns square and contain electrodes shaped like meshing fingers. The droplet dries, and the electrodes are then heated, burning off the polymer and leaving a porous metal-oxide film that forms the sensor.

"It's very porous and very sensitive," Martinez said. "We showed that this can work in real time, using a simulated breath into the device."

Gases passing over the device permeate the film and change its electrical properties depending on the particular biomarkers contained in the gas.
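As a rough illustration of the detection principle (the numbers, threshold, and function below are hypothetical, not taken from the Purdue/NIST device), a readout might flag a biomarker when the film's resistance shifts by more than a set fraction from its clean-air baseline:

```python
import numpy as np

def detect(resistance_trace, baseline, threshold=0.05):
    """Return True if resistance deviates from baseline by more than
    the given fractional threshold at any point in the trace.
    (Hypothetical illustration; values are not calibrated data.)"""
    rel_change = np.abs(np.asarray(resistance_trace) - baseline) / baseline
    return bool(np.any(rel_change > threshold))

baseline_ohms = 1_000.0
clean_air = [1_002.0, 998.0, 1_001.0]        # within sensor noise
acetone_pulse = [1_003.0, 1_080.0, 1_150.0]  # film conductance shifts as gas permeates

print(detect(clean_air, baseline_ohms))      # False
print(detect(acetone_pulse, baseline_ohms))  # True
```

A real instrument would replace the fixed threshold with per-biomarker calibration curves, which is the specificity problem Martinez describes the group now working on.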

Such breathalyzers are likely a decade or longer away from being realized, in part because precise standards have not yet been developed to manufacture devices based on the approach, Martinez said.

"However, the fact that we were able to do this in real time is a big step in the right direction," he said.


Your Genome in Minutes: New Technology Could Slash Sequencing Time

The researchers have patented an early prototype technology that they believe could lead to an ultrafast commercial DNA sequencing tool within ten years. Their work is described in a study published this month in the journal Nano Letters.

 Dr Joshua Edel shows the prototype chip, and an array of the chips prior to use.

The research suggests that scientists could eventually sequence an entire genome in a single lab procedure, whereas at present it can only be sequenced after being broken into pieces in a highly complex and time-consuming process. Fast and inexpensive genome sequencing could allow ordinary people to unlock the secrets of their own DNA, revealing their personal susceptibility to diseases such as Alzheimer's, diabetes and cancer. Medical professionals are already using genome sequencing to understand population-wide health issues and research ways to tailor individualised treatments or preventions.

Dr Joshua Edel, one of the authors on the study from the Department of Chemistry at Imperial College London, said: "Compared with current technology, this device could lead to much cheaper sequencing: just a few dollars, compared with $1m to sequence an entire genome in 2007. We haven't tried it on a whole genome yet but our initial experiments suggest that you could theoretically do a complete scan of the 3,165 million bases in the human genome within minutes, providing huge benefits for medical tests, or DNA profiles for police and security work. It should be significantly faster and more reliable, and would be easy to scale up to create a device with the capacity to read up to 10 million bases per second, versus the typical 10 bases per second you get with the present day single molecule real-time techniques."
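The throughput figures quoted above can be sanity-checked with simple arithmetic:

```python
# Back-of-envelope check of the throughput claims in the article.
genome_bases = 3_165_000_000   # base count quoted by Dr Edel

current_rate = 10              # bases/s, present-day single-molecule techniques (quoted)
proposed_rate = 10_000_000     # bases/s, scaled-up nanopore device (quoted)

current_time_years = genome_bases / current_rate / (3600 * 24 * 365)
proposed_time_minutes = genome_bases / proposed_rate / 60

print(f"at 10 bases/s:         ~{current_time_years:.0f} years")
print(f"at 10 million bases/s: ~{proposed_time_minutes:.1f} minutes")
```

A single reader at today's 10 bases per second would need about a decade for one genome, while the proposed device's rate brings that down to roughly five minutes, consistent with the "within minutes" claim.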

In the new study, the researchers demonstrated that it is possible to propel a DNA strand at high speed through a tiny 50 nanometre (nm) hole -- or nanopore -- cut in a silicon chip, using an electrical charge. As the strand emerges from the back of the chip, its coding sequence (bases A, C, T or G) is read by a 'tunnelling electrode junction'. This 2 nm gap between two wires supports an electrical current that interacts with the distinct electrical signal from each base code. A powerful computer can then interpret the base code's signal to construct the genome sequence, making it possible to combine all these well-documented techniques for the first time.

Sequencing using nanopores has long been considered the next big development for DNA technology, thanks to its potential for high speed and high-capacity sequencing. However, designs for an accurate and fast reader have not been demonstrated until now.

Co-author Dr Emanuele Instuli, from the Department of Chemistry at Imperial College London, explained the challenges they faced in this research: "Getting the DNA strand through the nanopore is a bit like sucking up spaghetti. Until now it has been difficult to precisely align the junction and the nanopore. Furthermore, engineering the electrode wires with such dimensions approaches the atomic scale and is effectively at the limit of existing instrumentation. However in this experiment we were able to make two tiny platinum wires into an electrode junction with a gap sufficiently small to allow the electron current to flow between them."

This technology would have several distinct advantages over current techniques, according to co-author, Aleksandar Ivanov from the Department of Chemistry at Imperial College London: "Nanopore sequencing would be a fast, simple procedure, unlike available commercial methods, which require time-consuming and destructive chemical processes to break down and replicate small sections of the DNA molecules to determine their sequence. Additionally, these silicon chips are incredibly durable compared with some of the more delicate materials currently used. They can be handled, washed and reused many times over without degrading their performance."

Dr Tim Albrecht, another author on the study, from the Department of Chemistry at Imperial College London, says: "The next step will be to differentiate between different DNA samples and, ultimately, between individual bases within the DNA strand (ie true sequencing). I think we know the way forward, but it is a challenging project and we have to make many more incremental steps before our vision can be realised."


Better Control of Building Blocks for Quantum Computer

The scientists' findings have been published in the current issue of the journal Nature (Dec. 23).

A qubit is the building block of a possible future quantum computer, which would far outstrip current computers in terms of speed. One way to make a qubit is to trap a single electron in semiconductor material. Like a normal computer bit, a qubit can adopt the states '0' and '1'. This is achieved using the spin of the electron, a property that can be pictured as the electron spinning on its axis. The spin can point in two directions (representing the '0' state and the '1' state).
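The two spin states and their superpositions can be sketched in a few lines of linear algebra (a generic textbook model, not the Delft group's own code):

```python
import numpy as np

# The two spin states as basis vectors: spin-up -> |0>, spin-down -> |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Unlike a classical bit, a qubit can sit in a superposition of both states.
psi = (ket0 + ket1) / np.sqrt(2)

# Flipping the spin (the quantum analogue of a NOT gate) is the Pauli-X matrix;
# in a spin-orbit qubit this rotation is driven electrically, not magnetically.
X = np.array([[0, 1], [1, 0]], dtype=complex)

print(np.allclose(X @ ket0, ket1))  # |0> flips to |1>
print(np.abs(psi) ** 2)             # measuring psi gives 0 or 1 with equal probability
```

The hard experimental problem the article describes is performing exactly this kind of rotation with on-chip electric fields instead of bulky magnetic ones.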

Scanning electron image of the nanowire device with gate electrodes used to electrically control qubits, and source and drain electrodes used to probe qubit states.

Until now, the spin of an electron has been controlled using magnetic fields. However, these fields are extremely difficult to generate on a chip. The electron spin in the qubits that the Dutch scientists are now producing can instead be controlled by a charge or an electric field. This form of control has major advantages, as Leo Kouwenhoven, scientist at the Kavli Institute of Nanoscience at TU Delft, points out. "These spin-orbit qubits combine the best of both worlds. They employ the advantages of both electronic control and information storage in the electron spin," he says.

There is another important new development in the Dutch research: the scientists have been able to embed two qubits in nanowires made of a semiconductor material (indium arsenide). These wires are of the order of nanometres in diameter and micrometres in length. "These nanowires are being increasingly used as convenient building blocks in nanoelectronics. Nanowires are an excellent platform for quantum information processing, among other applications," says Kouwenhoven.


New Single-Pixel Photo Camera Developed

In 2009, Willard S. Boyle and George E. Smith received the Nobel Prize in Physics for having succeeded in capturing images with a digital sensor. The key was a procedure that recorded the electrical signals generated via the photoelectric effect at a large number of image points, known as pixels, in a short period of time. The CCD sensor in a photo camera acts like the retinal mosaic of the human eye. Since its invention, the use of the digital format for recording images has revolutionised various fields, photography amongst them, as it facilitates image processing and distribution.

 The sequence shows the difference between the original image (obtained with a wrong key) and the unencrypted one.

Digital cameras with CCD sensors of 5, 6 and even 12 million pixels are now common. Since the sensor's dimensions are usually fixed (typically 24.7 square millimetres), one might logically think that the more pixels, the better the image quality. That idea is not quite right, as other factors are involved, such as the quality of the lens. What is certain is that more pixels demand more memory for storage (a 6-million-pixel image occupies about 2 MB).

In recent years, the world of image technologies has become a booming scientific field, mainly because of biomedical applications. Holographic microscopes, light-operated scissors, laser scalpels, and so on, have enabled the design of minimally invasive diagnosis and surgery techniques. In this context, researchers have recently demonstrated the remarkable possibility of capturing high-quality digital images with a sensor using just a single pixel. This technique, dubbed 'ghost imaging' by scientists, is based on the sequential recording of the light intensity transmitted or reflected by an object illuminated by a sequence of noisy light beams. This noisy light is what we observe, for example, when we illuminate a piece of paper using a laser pointer.
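The correlation at the heart of ghost imaging can be sketched numerically: illuminate a scene with many random patterns, record a single "bucket" intensity per pattern, and correlate the two (a generic toy model, not the GROC group's code):

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny binary "object" (an 8x8 bar shape); purely illustrative.
obj = np.zeros((8, 8))
obj[1:7, 2] = 1.0   # vertical bar
obj[1, 2:6] = 1.0   # top bar

n_patterns = 20_000
h, w = obj.shape

# Sequence of noisy (random) illumination patterns, as in ghost imaging.
patterns = rng.random((n_patterns, h, w))

# Single-pixel "bucket" detector: total light transmitted by the object
# under each pattern -- one scalar per pattern, no spatial information.
bucket = patterns.reshape(n_patterns, -1) @ obj.ravel()

# Reconstruct by correlating bucket signals with the known patterns:
# <S * P> - <S><P> recovers the object up to an overall scale.
recon = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)

# The reconstruction correlates strongly with the hidden object.
corr = np.corrcoef(recon.ravel(), obj.ravel())[0, 1]
print(corr > 0.8)
```

The encryption scheme described below follows from this: the bucket values alone are a meaningless number sequence unless the receiver can regenerate the same noise patterns from a shared secret key.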

The GROC researchers have successfully captured 2D object images (such as the UJI logo or the face of one of the maids of honour from the famous Las Meninas painting as reinterpreted by Picasso in 1957) using this amazing single-pixel camera. The key for the success lies in the use of a small 1-inch LCD screen, similar to that used in video projectors or those we have at home, but in miniature. Its properties or features can be modified using a computer in order to generate the necessary light beams.

Furthermore, the researchers from Castelló have demonstrated, for the first time and on a worldwide scale, the possibility of adapting the technique in such a way that it allows an image to be securely sent to a set of authorised users using a public distribution channel, such as the Internet. The information transmitted is a simple numerical sequence that allows the image to be retrieved, but only if one knows the hidden codes enabling the generation of the noise patterns with which the public access information has been created.

The first results of this study, which is still under way, were published in the first July issue of the journal Optics Letters, and a month later Nature Photonics, the main journal in optics, included a review of it in its September issue, in the section containing the most relevant articles published in the field.

The technology applied to the single-pixel camera had not previously been used for image encryption, but it is now being studied by several research groups -- including GROC -- to obtain images of biological tissues that are difficult to view with pixelated devices such as today's digital cameras, whether because of their unusual transparency or their location deeper inside the body (some centimetres beneath surface mucosa). Furthermore, the researchers point out that using this technique for image encryption could improve the security of image transmission and product authentication, or simply hide information from unauthorised parties, making it a highly efficient tool against data phishing.


First High-Temp Spin-Field-Effect Transistor Created

The team has developed an electrically controllable device whose functionality is based on an electron's spin. Their results, the culmination of a 20-year scientific quest involving many international researchers and groups, are published in the current issue of Science.

The team, which also includes researchers from the Hitachi Cambridge Laboratory and the Universities of Cambridge and Nottingham in the United Kingdom as well as the Academy of Sciences and Charles University in the Czech Republic, is the first to combine the spin-helix state and anomalous Hall effect to create a realistic spin-field-effect transistor (FET) operable at high temperatures, complete with an AND-gate logic device -- the first such realization in the type of transistors originally proposed by Purdue University's Supriyo Datta and Biswajit Das in 1989.

 Illustration of the spin-Hall injection device used as a base for the spin-field-effect transistor (FET). A gate on top of the electron channel (not shown) controls the precession of the spin-helix state (shown in the upper right panel) and, with it, the output signals measured in the Hall bars.

"One of the major stumbling blocks was that to manipulate spin, one may also destroy it," Sinova explains. "It has only recently been realized that one could manipulate it without destroying it by choosing a particular set-up for the device and manipulating the material. One also has to detect it without destroying it, which we were able to do by exploiting our findings from our study of the spin Hall effect six years ago. It is the combination of these basic physics research projects that has given rise to the first spin-FET."

Sixty years after the transistor's discovery, its operation is still based on the same physical principles of electrical manipulation and detection of electronic charges in a semiconductor, says Hitachi's Dr. Jorg Wunderlich, senior researcher in the team. Subsequent technology has focused on scaling down the device, he says, succeeding to the point where the ultimate size limit is now approaching; the focus has therefore shifted to establishing new physical principles of operation that can overcome these limits -- specifically, using the electron's elementary magnetic moment, or so-called "spin," as the logic variable instead of the charge.

This new approach constitutes the field of "spintronics," which promises potential advances in low-power electronics, hybrid electronic-magnetic systems and completely new functionalities.

Wunderlich says the 20-year-old theory of electrical manipulation and detection of the electron's spin in semiconductors -- the cornerstone of which is the "holy grail" known as the spin transistor -- has proven unexpectedly difficult to realize experimentally.

"We used recently discovered quantum-relativistic phenomena for both spin manipulation and detection to realize and confirm all the principal phenomena of the spin transistor concept," Wunderlich explains.

To observe the electrical manipulation and detection of spins, the team placed a specially designed planar photo-diode (rather than the typically used circularly polarized light source) next to the transistor channel. By shining light on the diode, they injected photo-excited electrons, rather than the customary spin-polarized electrons, into the transistor channel. Voltages applied to input-gate electrodes controlled the precession of the spins via quantum-relativistic effects. The same effects give rise to transverse electrical voltages in the device, which constitute the output signal and depend on the local orientation of the precessing electron spins in the transistor channel.
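The gate-controlled precession at the heart of the original Datta-Das proposal can be illustrated numerically. The sketch below uses illustrative parameter values, not those of the Hitachi device (which additionally exploits the spin-helix state): the gate sets the spin-orbit coupling alpha, spins rotate by an angle theta while crossing the channel, and the detected signal varies as cos(theta).

```python
import math

# Gate-controlled spin precession in a Datta-Das-style channel (illustrative
# numbers: GaAs effective mass, a 1-micron channel). The gate voltage sets the
# spin-orbit coupling alpha; spins rotate by theta while crossing the channel,
# and the detected signal varies as cos(theta).

HBAR = 1.054571817e-34        # J*s
M_EFF = 0.067 * 9.109e-31     # electron effective mass in GaAs, kg
L = 1e-6                      # channel length, m

def precession_angle(alpha):
    """Rotation accumulated over the channel for spin-orbit coupling alpha (J*m)."""
    return 2 * M_EFF * alpha * L / HBAR**2

def output_signal(alpha):
    """Normalized detector signal: +1 for unrotated spins, -1 for a half turn."""
    return math.cos(precession_angle(alpha))

# The coupling that rotates spins by exactly half a turn flips the output sign:
alpha_half_turn = math.pi * HBAR**2 / (2 * M_EFF * L)
assert abs(output_signal(0.0) - 1.0) < 1e-12
assert abs(output_signal(alpha_half_turn) + 1.0) < 1e-9
```

Sweeping the gate thus sweeps the output continuously between the two extremes, which is the "on/off" action of a spin-FET.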

The new device can have a broad range of applications in spintronics research as an efficient tool for manipulating and detecting spins in semiconductors without disturbing the spin-polarized current or using magnetic elements.

Wunderlich notes the observed output electrical signals remain large at high temperatures and are linearly dependent on the degree of circular polarization of the incident light. The device therefore represents a realization of an electrically controllable solid-state polarimeter which directly converts polarization of light into electric voltage signals. He says future applications may exploit the device to detect the content of chiral molecules in solutions, for example, to measure the blood-sugar levels of patients or the sugar content of wine.

This work forms part of wider spintronics activity within Hitachi worldwide, which expects to develop new functionalities for use in fields as diverse as energy transfer, high-speed secure communications and various forms of sensor.

While Wunderlich acknowledges it is yet to be determined whether spin-based devices will become a viable alternative or complement to their standard electron-charge-based counterparts in current information-processing devices, he says his team's discovery has shifted the focus from theoretical academic speculation to prototype microelectronic device development.

"For spintronics to revolutionize information technology, one needs a further step of creating a spin amplifier," Sinova says. "For now, the device aspect -- the ability to inject, manipulate and create a logic step with spin alone -- has been achieved, and I am happy that Texas A&M University is a part of that accomplishment."


Neuroimaging Helps to Predict Which Dyslexics Will Learn to Read

Their work, the first to identify specific brain mechanisms involved in a person's ability to overcome reading difficulties, could lead to new interventions to help dyslexics better learn to read.

"This gives us hope that we can identify which children might get better over time," said Fumiko Hoeft, MD, PhD, an imaging expert and instructor at Stanford's Center for Interdisciplinary Brain Sciences Research. "More study is needed before the technique is clinically useful, but this is a huge step forward."

 Activity in the highlighted brain area, located in the right inferior frontal gyrus, showed a significant positive correlation with reading gains measured 2.5 years after a group of young people with dyslexia were initially examined.

Hoeft is first author of a paper, which will be published online Dec. 20 in the Proceedings of the National Academy of Sciences. The senior author is John Gabrieli, PhD, a former Stanford professor now at the Massachusetts Institute of Technology.

Dyslexia, a brain-based learning disability that impairs a person's ability to read, affects 5 to 17 percent of U.S. children. Affected children's ability to improve their reading skills varies greatly, with about one-fifth able to benefit from interventions and develop adequate reading skills by adulthood. But until now, what happens in the brain to allow for this improvement had remained unknown.

Past imaging studies have shown greater activation of specific brain regions in children and adults with dyslexia during reading-related tasks; one area in particular, the inferior frontal gyrus (which is part of the frontal lobe), is used more in dyslexics than in typical readers. As the researchers noted in their paper, some experts have hypothesized that greater involvement of this part of the brain during reading is related to long-term gains in reading for dyslexic children.

For this study, Hoeft and colleagues aimed to determine whether neuroimaging could predict reading improvement and how brain-based measures compared with conventional educational measures.

The researchers gathered 25 children with dyslexia and 20 children with typical reading skills -- all around age 14 -- and assessed their reading with standardized tests. They then used two types of imaging, functional magnetic resonance imaging and diffusion tensor imaging (a specialized form of MRI), as the children performed reading tasks. Two-and-a-half years later, they reassessed reading performance and asked which baseline measures -- the brain images or the standardized reading tests -- predicted how much each child's reading skills would improve over time.

What the researchers found was that no behavioral measure, including widely used standardized reading and language tests, reliably predicted reading gains. But children with dyslexia who at baseline showed greater activation in the right inferior frontal gyrus during a specific task and whose white matter connected to this right frontal region was better organized showed greater reading improvement over the next two-and-a-half years. The researchers also found that looking at patterns of activation across the whole brain allowed them to very accurately predict future reading gains in the children with dyslexia.
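The prediction step can be pictured with a bare-bones stand-in. The study's whole-brain analysis used multivariate pattern classification; the sketch below substitutes a single baseline measure and a simple linear fit, but it shows the key methodological point: each child's future gain is predicted by a model trained only on the other children (leave-one-out cross-validation), so accuracy reflects genuine prediction rather than curve fitting.

```python
# Leave-one-out prediction of reading gains from a baseline brain measure.
# Toy data: higher right-IFG activation at baseline goes with larger later gains.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def loo_predictions(activation, gains):
    """Predict each child's gain from a model trained on all the others."""
    preds = []
    for i in range(len(activation)):
        a, b = fit_line(activation[:i] + activation[i + 1:],
                        gains[:i] + gains[i + 1:])
        preds.append(a * activation[i] + b)
    return preds

activation = [0.2, 0.5, 0.8, 1.1, 1.4]   # baseline measure per child (arbitrary units)
gains      = [1.0, 2.1, 2.9, 4.2, 4.8]   # reading improvement 2.5 years later
preds = loo_predictions(activation, gains)
# Held-out predictions track the true gains when the relationship is real.
assert all(abs(p - g) < 1.0 for p, g in zip(preds, gains))
```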

"The reason this is exciting is that until now, there have been no known measures that predicted who will learn to compensate," said Hoeft.

As the researchers noted in their paper, "fMRI is typically viewed as a research tool that has little practical implication for an individual with dyslexia." Yet these findings suggest that, after additional study, brain imaging could be used as a prognostic tool to predict reading improvement in dyslexic children.

The other exciting implication, Hoeft said, involves therapy. The research shows that gains in reading for dyslexic children involve different neural mechanisms and pathways than those for typically developing children. By understanding this, researchers could develop interventions that focus on the appropriate regions of the brain and that are, in turn, more effective at improving a child's reading skills.

Hoeft said this work might also encourage the use of imaging to enhance the understanding (and potentially the treatment) of other disorders. "In general terms, these findings suggest that brain imaging may play a valuable role in neuroprognosis, the use of brain measures to predict future reductions or exacerbations of symptoms in clinical disorders," she explained.

The authors noted several caveats with their findings. The children were followed for two-and-a-half years; longer-term outcomes are unknown. The study also involved children in their teens; more study is needed to determine whether brain-based measures can predict reading progress in younger children. Hoeft is now working on a study of pre-readers, being funded by the National Institute of Child Health and Human Development.

Hoeft and Gabrieli collaborated on the study with researchers from Vanderbilt University, University of York in England and University of Jyväskylä in Finland. Stanford co-authors include Gary Glover, PhD, professor of radiology, and Allan Reiss, MD, the Howard C. Robbins Professor of Psychiatry and Behavioral Sciences and professor of radiology and director of the Center for Interdisciplinary Brain Sciences Research.

The study was supported by grants from the National Institute of Child Health and Human Development, Stanford University Lucile Packard Children's Hospital Child Health Research Program, the William and Flora Hewlett Foundation and the Richard King Mellon Foundation.


A Way to Make the Smart Grid Smarter

New semiconductor-based devices for managing power on the grid could make the "smart grid" even smarter. They would allow electric vehicles to be charged fast and let utilities incorporate large amounts of solar and wind power without blackouts or power surges. These devices are being developed by a number of groups, including those that recently received funding from the new Advanced Research Projects Agency for Energy (ARPA-E) and the National Science Foundation. 

 Smart Transformer: A prototype of a smart solid-state transformer from the Electric Power Research Institute. It’s smaller and more versatile than today’s transformers. The module on the left converts high-voltage alternating current from the grid to direct current. On the right is an inverter that converts that power to the 120-volt AC that comes out of standard wall outlets. To the right of the outlets are two more power interfaces, one for 240-volt AC power and one for 400-volt DC.

As utilities start to roll out the smart grid, they are focused on gathering information, such as up-to-the-minute measurements of electricity use from smart meters installed at homes and businesses. But as the smart grid progresses, they'll be adding devices, such as smart solid-state transformers, that will strengthen their control over how power flows through their lines, says Alex Huang, director of a National Science Foundation research center that's developing such devices. "If smart meters are the brains of the smart grid," he says, "devices such as solid-state transformers are the muscle." These devices could help change the grid from a system in which power flows just one way—from the power station to consumers—to one in which homeowners and businesses commonly produce power as well. 

Today's transformers are single-function devices. They change the voltage of electricity from one level to another, such as stepping it down from the high voltages at which power is distributed to the 120- and 240-volt levels used in homes. The new solid-state transformers are much more flexible. They use transistors and diodes and other semiconductor-based devices that, unlike the transistors used in computer chips, are engineered to handle high power levels and very fast switching. In response to signals from a utility or a home, they can change the voltage and other characteristics of the power they produce. They can put out either AC or DC power, or take in AC and DC power from wind turbines and solar panels and change the frequency and voltage to what's needed for the grid. They have processors and communications hardware built in, allowing them to communicate with utility operators, other smart transformers, and consumers. 

The devices are so flexible that researchers are still working out how to make the best use of them. There are several possibilities. Today, charging an electric vehicle at home takes many hours, even if it's plugged into a special charger with 220/240-volt circuits rather than more common 110/120-volt outlets. Direct-current chargers can cut the time for charging a 24-kilowatt-hour pack like the one in the new Nissan Leaf from eight hours to just 30 minutes, but they're inefficient, wasting about 10 to 12 percent of the power that comes in to them. The new transformers could replace these special chargers, and they're more efficient, wasting only about 4 percent of the power, says Arindam Maitra, a senior project manager at the Electric Power Research Institute, which is developing smart transformers. What's more, because the transformers have communications and processing capability, if several neighbors plug in their cars to charge at the same time, the transformers can prevent circuits from being overloaded by slowing or postponing charging based on consumer preferences and price signals from the utility. The same devices can also be used to send DC power from solar panels to the grid, eliminating the need for some equipment currently used to convert the power from solar panels and leveling out fluctuations in their voltage that could otherwise cause the panels to trip off and stop producing electricity. 
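The coordination idea can be sketched as a small scheduling loop. Everything below is hypothetical: the limit, the rate, and the greedy policy are invented for illustration, and real smart transformers would also weigh consumer preferences and utility price signals. Cars charge at full rate only while the shared transformer has headroom; the rest are postponed to later hours.

```python
# Hypothetical charging coordinator for one neighborhood transformer.
# Cars charge at up to 7.2 kW each, the neediest first, and a car is
# postponed whenever granting it would exceed the shared limit.

TRANSFORMER_LIMIT_KW = 20.0
CHARGE_RATE_KW = 7.2           # a typical 240-volt level-2 charging rate

def schedule(requests_kwh, hours):
    """Return per-hour energy grants that never exceed the transformer limit."""
    remaining = list(requests_kwh)
    plan = []
    for _ in range(hours):
        used, slot = 0.0, [0.0] * len(remaining)
        for i in sorted(range(len(remaining)), key=lambda j: -remaining[j]):
            give = min(CHARGE_RATE_KW, remaining[i])
            if give > 0 and used + give <= TRANSFORMER_LIMIT_KW:
                slot[i] = give          # kWh delivered to car i this hour
                used += give
                remaining[i] -= give
        plan.append(slot)
    return plan, remaining

# Three cars each wanting 10 kWh: only two full-rate sessions fit per hour,
# so charging is staggered, yet everyone finishes within three hours.
plan, left = schedule([10.0, 10.0, 10.0], hours=3)
assert all(sum(slot) <= TRANSFORMER_LIMIT_KW for slot in plan)
assert left == [0.0, 0.0, 0.0]
```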

As power consumers such as big-box stores start to install more solar panels and energy-storage devices, smart transformers could be key to integrating power from these sources and the grid, Maitra says. Storage systems and distributed energy can allow stores to decide when to draw power from the grid and when to send power back to it, depending on the price of electricity at a given moment. Smart transformers could coordinate this potentially rapid change from buying to selling power, while keeping the grid stable and preventing neighbors' lights from dimming. They could even allow people to buy electricity from their neighbors, Huang says. "If you plug in your electric car at night, you could charge it by negotiating with those in your neighborhood who have excess power," he says. "You actually pay him. You don't pay the utility." 

Other kinds of devices can do many of the same things, but the idea of coordinating a large number and variety of consumer-owned devices makes utilities nervous about their ability to keep the grid stable. The new transformers would simplify the system and be utility-owned, making it easier for grid operators to keep the lights on, Maitra says.

Another potential benefit of smart transformers—or what the Electric Power Research Institute is starting to call smart-grid interfaces—is saving energy. For one thing, they can set the voltage of electricity at any given time so that it is at the minimum level appliances need to perform properly. One recent study suggested that doing this could reduce power consumption in the United States by up to 3 percent, equivalent to several times the power now generated by all the solar panels in the country. Even larger savings could come from smart transformers supplying DC power rather than AC to servers in data centers. Ordinarily, the servers convert the AC to DC themselves—and they do it inefficiently. (The uninterruptible power supply adds further inefficient conversions of its own.) A recent demonstration of such a system by Duke Energy, a large utility company, and the Electric Power Research Institute found that supplying DC could cut power consumption at data centers by about 15 percent.
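A back-of-the-envelope calculation shows why cutting out conversions yields savings of that order. The stage efficiencies below are assumed for illustration, not Duke Energy's measured values.

```python
# Comparing grid power drawn per 100 kW of server load under assumed stage
# efficiencies (illustrative figures, not measured ones): the AC path passes
# through a double-conversion UPS and the server's AC supply; the DC path
# needs only the smart transformer's DC output and the server's DC input.

def power_drawn(it_load_kw, stage_efficiencies):
    """Grid power needed to deliver it_load_kw through a chain of conversions."""
    p = it_load_kw
    for eff in stage_efficiencies:
        p /= eff                     # each stage wastes a fraction of throughput
    return p

ac_path = [0.88, 0.94, 0.94]         # UPS rectifier, UPS inverter, server AC PSU
dc_path = [0.96, 0.96]               # transformer DC stage, server DC input

ac_kw = power_drawn(100.0, ac_path)  # roughly 129 kW from the grid
dc_kw = power_drawn(100.0, dc_path)  # roughly 109 kW
savings = 1 - dc_kw / ac_kw          # on the order of 15 percent
assert 0.10 < savings < 0.20
```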

Smart solid-state transformers are still in the development stage and likely are a few years away from being ready for market—researchers are still working on their efficiency and cost, for example. Taking advantage of their DC capability will require developing new construction standards for homes and businesses. Mark Wyatt, the vice president of smart-grid and energy systems at Duke Energy, cautions that solid-state transformers will need to be supplemented with other devices for controlling power on the grid, and they may not prove cost-effective in many areas. "It's not one size fits all," he says.

Yet in the long term, Huang says, smart transformers and other smart solid-state devices could enable an unprecedented amount of two-way power flow. "It could be revolutionary to how we construct the grid," he says.

By Kevin Bullis
From Technology Review

Sequencing a Single Chromosome

In the last three years, the number of human genomes that have been sequenced (their DNA read letter by letter) has jumped from a handful to hundreds, with thousands more in progress. But all of those genome readings lack some crucial information. A person inherits two copies of each chromosome, one maternally and one paternally. Existing sequencing methods do not indicate whether genetic variations that lie close to each other on the genomic map were inherited from the same parent, and therefore come from the same chromosome, or if some lie on the maternal chromosome and some on the paternal one. Knowing this has a variety of uses, from sequencing fetal DNA to more easily detecting the genes responsible for different diseases to better tracking human evolution.

 Capturing chromosomes: A microfluidics device designed by Stephen Quake and collaborators can capture a single chromosome, making it easier to analyze individual genomes.

Now two teams have devised ways to determine these groupings—known as the haplotype—in an individual. Stephen Quake and collaborators at Stanford University developed a way to physically separate the chromosome pairs and sequence each strand of DNA individually. Jay Shendure and colleagues at the University of Washington in Seattle sequenced DNA from single chromosomes in specially selected pools and used this information to piece together the genome. Both projects were published this week in Nature Biotechnology.

"It was a real technical flaw in the genomes [sequences] that have been published to date," says Quake, a bioengineer at Stanford who was one of Technology Review's top innovators under 35 in 2002. "Every genome we are going to do from now on going will be recorded with the haplotype."

Quake's team capitalized on microfluidics technology that they have developed for separating and analyzing single cells. First, the researchers trapped single cells during a specific phase of the cell cycle in which the two copies of its chromosomes are split apart. Then they burst open the cell, randomly partitioned chromosomes into different chambers on a microfluidics chip, and copied, or amplified, and analyzed the DNA in each chamber. 

Shendure, a TR35 winner in 2006, and his team amplified 40,000 letter stretches of DNA randomly sampled from individual chromosomes. Because each piece of DNA comes from one half of a chromosome pair, researchers know that all the genetic variants within its sequence lie on the same chromosome.

Shendure and Quake say that having haplotype information will have an enormous impact on human genetics, helping not only to diagnose and understand the genetic basis of some diseases but also to track the evolution of our species from primate ancestors.

If someone has two disease-linked mutations within a single gene, it's difficult to determine with current genome sequencing methods if there is one genetic mistake on the maternal copy and one on the paternal copy or if both variations lie within the same copy of the gene. In the former case, the person has two defective genes, which are likely to cause health problems. In the latter, the person has one good copy of the gene and one bad copy. In many cases, having the good copy can compensate for the defective one.
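Single-chromosome data resolves this ambiguity directly, because variants observed together on material from one chromosome copy must lie in cis. The toy phasing routine below uses invented data, and both published methods are far more involved; it simply shows how reads from separated chromosomes sort into two haplotypes.

```python
# Greedy phasing from single-chromosome reads: each read is a dict of
# {position: allele} sampled from one chromosome copy, so alleles on the
# same read are in cis. Seed one haplotype with the first read, then add
# each read to whichever haplotype it agrees with.

def phase(reads):
    hap_a, hap_b = dict(reads[0]), {}
    for read in reads[1:]:
        overlap = [p for p in read if p in hap_a]
        if overlap and all(hap_a[p] == read[p] for p in overlap):
            hap_a.update(read)       # consistent with haplotype A
        else:
            hap_b.update(read)       # conflicting alleles: the other copy
    return hap_a, hap_b

# Two disease-linked variant alleles ('T' and 'G') at positions 100 and 200.
cis_reads   = [{100: 'T', 200: 'G'}, {100: 'A', 200: 'C'}]
trans_reads = [{100: 'T', 200: 'C'}, {100: 'A', 200: 'G'}]

a, b = phase(cis_reads)
assert a == {100: 'T', 200: 'G'} and b == {100: 'A', 200: 'C'}  # one intact copy
a, b = phase(trans_reads)
assert a == {100: 'T', 200: 'C'} and b == {100: 'A', 200: 'G'}  # both copies hit
```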

Cell division: A specialized microfluidics device first isolates a single cell (left). Chemicals digest the cell membrane, releasing the chromosomes (middle) and individual chromosomes are captured (right) in a chamber, where they are amplified and analyzed.

Haplotyping also makes it possible to determine a person's human leukocyte antigen (HLA) type, based on immune genes that must be closely matched between donor and recipient in bone marrow or organ transplants. "It's one of the most polymorphic [variable] parts of the human genome," says Quake. Current methods to determine HLA type generate a list of variations but give no information about which of them lie on which chromosome. "If you don't keep track of this, you may not be able to get a perfect match," says Quake. "We showed you can measure [the haplotype] and get information that in principle can be used for better matching for bone marrow transplants."

The technology might also be used to sequence fetal genomes from DNA collected from the mother's blood, in order to detect genetic abnormalities. (The cell-free DNA in the mother's blood is a mix of her own and the child's, making it particularly difficult to generate a whole fetal genome sequence.) 

Beyond medicine, researchers say, haplotype information will aid research in population genetics, such as estimating the size and timing of human expansions and migrations. "You can capture diversity to higher resolution if you have individual chromosomes," says Nicholas Schork, a geneticist at the Scripps Research Institute who was not involved in either project and wrote a commentary on the research for Nature Biotechnology. "You lose a lot of information if you look at things at a genotype level versus a haplotype level." 

Researchers have been able to statistically infer haplotype for European populations, thanks to the fact that Europeans went through a genetic bottleneck thousands of years ago. (Haplotypes very gradually grow shorter, as the chromosome pairs break and reassemble with each generation. Europeans have long haplotypes that haven't yet broken down, making them easier to analyze.) But statistical techniques have not worked for African populations, meaning that genetic information for this group is much sparser. For this reason, most of the genome-wide association studies done to date have focused on European populations.

Both approaches add to the cost of genome sequencing, so it's not clear how quickly they will catch on, Schork says. "Shendure's approach is one people could likely implement in labs now," he says. Quake's approach generates much more complete data—a haplotype that is the length of an entire chromosome—but it is technically more challenging, requiring specialized chips to analyze the single cells. "Single cell sequencing and the ability to separate chromosomes in a dish is complicated," says Schork. "Unless someone builds an affordable assay, it won't be used routinely." Quake says that the chips that his lab and close collaborators use are currently being built at an academic foundry at Stanford. He says, "Perhaps there will be a commercial solution at some point." 

By Emily Singer
From Technology Review

Cell-Seeded Sutures to Repair the Heart

Over the last decade, scientists have experimented with using stem cells to heal or replace the scarred tissue that mars the heart after a heart attack. While the cells do spur some level of repair in animals, human tests have resulted in modest or transient benefits at best. Now researchers have developed a new kind of biological suture, made from polymer strands infused with stem cells, that might help surmount two major obstacles to using stem cells to heal the heart: getting the cells to the right spot and keeping them there long enough to trigger healing. 

 Biological sutures: Hair-thin threads seeded with stem cells (marked in red and blue) could help heal the heart.

Scientists from the Worcester Polytechnic Institute, in Massachusetts, have shown that cells derived from human bone marrow, known as mesenchymal stem cells, can survive on the threads and maintain their ability to differentiate into different cell types after being sewn through a collagen matrix that mimics tissue. Preliminary tests in rats suggest that the technology helps the cells survive in the heart. 

"This is an out-of-the-box approach," says Charles Murry, a director of the Center for Cardiovascular Biology at the University of Washington, who was not involved in the study. "Putting cells on thread—once you hear it, it seems simple. But I've been in this field for 15 years, and I never thought of it." 

One major challenge has been to get an adequate number of cells to remain in the area of injury. For example, in human studies of injected mesenchymal stem cells, only about 1 to 10 percent of the cells remained at the site after injection. "Presumably the cells will be much happier if they have something to adhere to than if you just put them in and left them to fend for themselves," says Murry.

Glenn Gaudette and collaborators at Worcester Polytechnic created the sutures with hair-thin threads made of fibrin, a protein polymer that the body uses to initiate wound healing and a common ingredient in tissue engineering. The microthread technology was developed by George Pins, associate professor of bioengineering at the institute. 

The strands are transferred to a tube filled with stem cells and growth solution; the tube slowly rotates, so the stem cells can adhere to the full circumference of the suture. Once populated by cells, the suture is attached to a surgical needle.

Cell-coated: The cells grow along polymer fibers, shown here in green.

About 10,000 mesenchymal cells can inhabit a two-centimeter length of bundled threads. Scientists can vary the size of the bundle, and the speed at which the material breaks down, depending on the application. 

"This new technique provides a wonderful tool for cell delivery for cardiac repair and for electrical problems as well, where you might want to create a new electrical path," says Ira Cohen, director of the Institute for Molecular Cardiology at Stony Brook University in New York. Cohen has collaborated previously with Gaudette but was not involved in this project. 

Gaudette's team is now studying the sutures in rats, to determine how long the cells remain at the injury site, and whether they can help heal tissue. One question that remains to be answered is whether the technology can be scaled up to deliver the hundreds of millions of cells needed to repair the heart wall.
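The scale-up question can be made concrete with the article's own numbers, naively extrapolated:

```python
# Naive extrapolation from the figures above: about 10,000 cells per
# two-centimeter bundle, against a repair dose of hundreds of millions.

CELLS_PER_BUNDLE = 10_000
BUNDLE_LENGTH_CM = 2.0

def bundles_needed(total_cells):
    return total_cells / CELLS_PER_BUNDLE

needed = bundles_needed(200e6)                   # a mid-range target dose
total_length_m = needed * BUNDLE_LENGTH_CM / 100
assert needed == 20_000
assert total_length_m == 400.0   # 400 m of seeded thread at today's density
```

At today's seeding density, then, delivering a full repair dose would take tens of thousands of bundles, which is why higher cell loading or complementary delivery routes remain open questions.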

While both animal and human studies show that mesenchymal cells can boost heart function, it's not clear how. The predominant idea is that the cells, rather than forming new tissue themselves, release growth factors and other molecules that spur the growth of new blood vessels. They may also signal resident cells to begin dividing in order to grow new tissue. 

Tissue engineers are developing a number of different methods for delivering stem cells to a wounded heart, including growing patches of beating heart muscle. But Gaudette hopes that biological sutures will prove more versatile than patches, and ultimately less invasive. Because of the threadlike structure, the material has the potential to be delivered via a catheter that passes through a vein. 

The research is also part of a larger trend to combine stem cells with tissue engineering and novel biomaterials to help cells grow more naturally and to improve their survival rate once implanted. "If you think of the heart as a damaged piece of material—a concept that I think is gaining traction—you're not going to want to randomly introduce cells," says Kenneth Chien, director of the Cardiovascular Research Center at Massachusetts General Hospital. "We want to force cells to go where we want and align the way we want." He likens this approach to that of a skilled tailor who repairs a sweater using the same thread and stitching as the existing material. 

While Gaudette's study focused on mesenchymal stem cells, other researchers are pursuing the same approach with other cell types, such as cardiac myocytes, which make up the heart's striated muscle. "Presumably you could make threads of vascular cells, cardiac muscle cells, or multiple cell types," says Murry. "The greater limitation comes to how big a hole you can make in the heart to drag through a cable of cells."

By Emily Singer
From Technology Review

A Cheaper Way to Clean Water

Oasys Water, a company that has been developing a novel, inexpensive desalination technology, showed off a new development facility in Boston this week. The company, which has been demonstrating commercial-scale components of its system in recent months, plans to begin testing a complete system early next year and to start selling the systems by the end of 2011.

 Low sodium: Jacob Roy, an employee at desalination startup Oasys Water, takes measurements at a new development facility in Boston.

Currently, desalination is done mainly in one of two ways: water is either heated until it evaporates (called a thermal process) or forced through a membrane that allows water molecules but not salt ions to pass (known as reverse osmosis). Oasys's method uses a combination of ordinary (or forward) osmosis and heat to turn sea water into drinking water. 

On one side of a membrane is sea water; on the other is a solution containing high concentrations of carbon dioxide and ammonia. Water naturally moves toward this more concentrated "draw" solution, and the membrane blocks salt and other impurities as it does so. The resulting mixture is then heated, causing the carbon dioxide and ammonia to evaporate. Fresh water is left behind, and the ammonia and carbon dioxide are captured and reused.
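The driving force can be estimated with the van 't Hoff relation for osmotic pressure, pi = iMRT, where i is the number of dissolved particles per formula unit. The concentrations below are illustrative assumptions, not Oasys's actual formulation; the point is simply that the draw solution's osmotic pressure far exceeds seawater's, so water crosses the membrane toward it.

```python
# Idealized van 't Hoff estimate: pi = i * M * R * T. Concentrations are
# illustrative assumptions, not the company's actual formulation.

R = 0.08206    # L*atm/(mol*K)
T = 298.0      # K, about room temperature

def osmotic_pressure_atm(molarity, ions_per_formula):
    return ions_per_formula * molarity * R * T

seawater = osmotic_pressure_atm(0.6, 2)  # ~0.6 M NaCl: about 29 atm
draw     = osmotic_pressure_atm(3.0, 2)  # assumed 3 M ammonia/CO2 draw: ~147 atm
assert draw > seawater    # the gradient pulls water across the membrane
```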

Oasys says the technology could make desalination economically attractive not only in arid regions where there are no alternatives to desalination, but also in places where fresh water must be transported long distances. In California, for example, a massive aqueduct system now transports water from north to south. 
"The cost will be low enough to make aqueduct and dam projects look expensive in comparison," says Oasys cofounder and chief technology officer Robert McGinnis, who invented the company's core technology. The process could also require substantially less power than other desalination options. "The fuel consumption and carbon emissions will be lower than those of almost any other water source besides a local lake or aquifer," he says.

The key to making the process work was developing a draw solution with easy-to-remove solutes, something that was done at a lab at Yale University. "Others have tried to develop other solutes for desalination," McGinnis says, "but they haven't been successful so far." 

The next-biggest technical challenge has been developing the membrane. The membranes used in reverse osmosis are unsuitable for this process because they work best at high pressures. Forward osmosis doesn't use high pressures, so water moves through these membranes too slowly for the system to be practical. McGinnis and colleagues reëngineered the membranes, reducing the thickness of the supporting material and increasing its porosity without changing a very thin layer that blocks salts. These changes enabled water to pass through 25 times faster, McGinnis says.

The system uses far less energy than thermal desalination because the draw solution has to be heated only to 40 to 50 °C, McGinnis says, whereas thermal systems heat water to 70 to 100 °C. These low temperatures can be achieved using waste heat from power plants. Thermal-desalination plants are often located at power plants now, but it takes extra fuel to generate enough heat for them. The new system, on the other hand, could run on heat that otherwise would have been released into the atmosphere. 
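The temperature gap translates directly into heat demand. A back-of-envelope sensible-heat comparison (Q = m·c·dT), assuming a 25 °C starting temperature and a round-number specific heat for brine, shows why the lower regeneration temperature matters:

```python
# Sensible heat to warm 1 kg of solution from ambient, Q = m * c * dT,
# comparing draw-solution regeneration (40-50 C) with thermal desalination
# (70-100 C). c = 4.0 kJ/(kg*K) and 25 C ambient are assumed round numbers.
c = 4.0           # kJ/(kg*K), assumed
ambient = 25.0    # C, assumed starting temperature

def heat_kj(target_c, mass_kg=1.0):
    return mass_kg * c * (target_c - ambient)

fo_heat = heat_kj(45.0)        # midpoint of the 40-50 C range
thermal_heat = heat_kj(85.0)   # midpoint of the 70-100 C range
print(f"forward osmosis: {fo_heat:.0f} kJ/kg vs thermal: {thermal_heat:.0f} kJ/kg")
```

On this toy model the forward-osmosis system needs roughly a third of the heat per kilogram, and, as the article notes, heat at 40 to 50 °C can come free from a power plant's waste stream.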

The Oasys system requires just one-tenth as much electricity as a reverse-osmosis system, McGinnis says, because water doesn't have to be forced through a membrane at high pressure. That's a crucial source of savings, since electricity can account for nearly half the cost of reverse-osmosis technology. Not working with pressurized water also decreases the cost of building the plant—there is no need for expensive pipes that can withstand high pressures. The combination of lower power consumption and cheaper equipment results in lower overall costs.
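To see how a tenfold electricity cut flows through to total cost, here is a toy calculation taking the article's two numbers at face value: electricity is about half the cost of reverse osmosis, and the Oasys process uses one-tenth as much. All other costs are held fixed, which understates the savings, since the article also says the equipment itself is cheaper.

```python
# Toy cost model: electricity is ~half of total reverse-osmosis cost, and the
# forward-osmosis process needs one-tenth the electricity. Other costs held fixed.
ro_electricity = 0.5          # fraction of total RO cost
ro_other = 0.5                # everything else, unchanged in this sketch

fo_electricity = ro_electricity / 10
fo_total = fo_electricity + ro_other
print(f"relative cost vs reverse osmosis: {fo_total:.2f}")   # i.e. ~45% lower
```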

The Oasys system will not help everyone. For example, it is unlikely to do much for farmers; although they account for about 80 percent of fresh-water consumption, it wouldn't be cost-effective for them, in part because farms are often located closer to aquifers and other water supplies than are large coastal cities such as L.A. In addition, "there's a minimum amount of energy needed to strip salt ions out of water," says Peter Gleick, president of the Pacific Institute for Studies in Development, Environment, and Security in Oakland, California. "I don't think it will ever be cheap enough for irrigation." In agricultural areas where water is scarce, he says, it's cheaper to switch to better irrigation practices. 

As coastal cities grow, however, so will their need for desalination services, says Kenneth Herd, director of the water supply program at the Southwest Florida Water Management District. "It's not a matter of if," he says, "but a matter of when." 

By Kevin Bullis
From Technology Review

Another Smokin’ Hot Electric Superbike

We’re still waiting on the Mission One electric motorcycle, but that isn’t keeping the San Francisco startup from building a race-ready ride for the TTXGP electric motorcycle grand prix.

Mission Motors rolled into the International Motorcycle Show with a gorgeous e-moto that looks like something you might see from Ducati. That trellis frame is super sexy, Marchesini wheels always look hot and the specs on this beast are impressive.

“We are excited to announce the Mission R, our compact and powerful factory electric racebike,” founder Edward West said in a statement. “This bike represents the culmination of all the company’s learning in both electric powertrains and motorcycle engineering.”

A look at the numbers suggests this could be a formidable competitor in the TTXGP race series. The liquid-cooled AC induction motor is good for a claimed 141 horsepower and 115 foot-pounds of torque. The motor draws juice from a 14.4 kilowatt-hour battery, which is big for a bike when you consider the Mitsubishi i-MiEV has a 16 kWh pack. The high-tech drivetrain features a Mission EVT 100-kilowatt motor controller with customizable throttle and regenerative braking maps. Mission claims the bike can hit 160 mph, which seems realistic considering the Mission One hit 150.059 at Bonneville.

The range will depend upon how hard you’re flogging the bike, but Mission said it should be on par with the Mission One’s 150 miles. We’re going to assume you aren’t riding like Valentino Rossi to get that kind of range. Drain the pack and you’re looking at two hours to charge it at 220 volts.
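Mission's published figures hang together. Here's a minimal sanity check, assuming the claimed 150-mile range uses the full 14.4 kWh pack and ignoring charger losses:

```python
# Sanity checks on the Mission R's published figures (charger losses ignored).
battery_kwh = 14.4
range_miles = 150.0      # claimed, on par with the Mission One
charge_hours = 2.0       # at 220 volts

consumption_wh_per_mile = battery_kwh * 1000 / range_miles
charge_power_kw = battery_kwh / charge_hours
charge_current_a = charge_power_kw * 1000 / 220   # implied average draw

print(f"{consumption_wh_per_mile:.0f} Wh/mile, "
      f"{charge_power_kw:.1f} kW average, ~{charge_current_a:.0f} A at 220 V")
```

That works out to roughly 96 watt-hours per mile and an average charge rate of about 7.2 kilowatts, a plausible draw for a 220-volt circuit.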

The billet aluminum and chrome-moly steel chassis was designed by noted suspension designer James Parker. The motor is a stressed member and the battery pack — which uses a carbon-fiber casing — is a semi-stressed member. The rear swingarm is billet aluminum with an Öhlins shock. The fork also is from Öhlins. More top-shelf components include Marchesini forged magnesium wheels and Brembo billet brake calipers. The bodywork was designed by Tim Prentice, who designed the 2010 Triumph Thunderbird, among other things.

The bike weighs 545 pounds ready to ride. That’s 208 pounds heavier than a Ducati 1198. But Mission says the best way to win races on an electric motorcycle is to ensure you have plenty of power and energy storage, and to package it in a chassis designed specifically for the task. Mission promises that “because the heaviest parts of the powertrain have been packaged tightly around the center of mass, the bike handles beautifully.”

We’ll have to take Mission’s word for it, because the Mission R is a one-off built for the TTXGP electric motorcycle grand prix series. You can’t have one.

Although the company competed in the inaugural race in 2009, it skipped the 2010 season when capital dried up after the economic implosion. That’s why the Mission One motorcycle, which we’d expected to see this year, has been delayed until, well, Mission hasn’t said. Meanwhile, Mission has launched Mission Electric Vehicle Technology, a sideline developing EV drivetrains for the automotive and motorcycle sectors. The Mission R uses hardware developed by Mission EVT, so clearly the bike is a platform for showing what it can do.

Mission says the R will hit the track in early 2011 and will compete in the TTXGP racing series along with other races, events, and demonstrations.


Holography With Electrons

A report is published in this week's issue of Science.

Holography, as it is encountered in everyday life, uses coherent light, that is, a source of light where all the emitted light waves march in step. This light wave is divided into two parts, a reference wave and an object wave. The reference wave falls directly onto a two-dimensional detector, for example a photographic plate. The object wave interacts with and scatters off the object, and is then also detected. The superposition of both waves on the detector creates interference patterns, in which the shape of the object is encoded.
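The encoding step can be sketched numerically: superpose a plane reference wave and a weak point-source object wave on a one-dimensional detector and look at the resulting intensity. The wavelength, geometry, and amplitudes below are arbitrary illustrative choices, not values from the experiment.

```python
import numpy as np

wavelength = 1.0                       # arbitrary units
k = 2 * np.pi / wavelength
x = np.linspace(-20, 20, 2001)         # detector coordinate

# Reference wave: plane wave at normal incidence (constant phase across detector).
reference = np.ones_like(x, dtype=complex)

# Object wave: spherical wave from a point scatterer a distance d from the
# detector plane, weak compared to the reference (amplitude ~ 1/d).
d = 50.0
object_wave = np.exp(1j * k * np.sqrt(x**2 + d**2)) / d

# The detector records intensity; the cross term between the two waves encodes
# the scatterer's position as fringes whose spacing shrinks away from center.
intensity = np.abs(reference + object_wave) ** 2
print(f"fringe intensity: min {intensity.min():.3f}, max {intensity.max():.3f}")
```

The fringes oscillate around the reference intensity; reading their spacing and curvature off the detector is what lets the object's shape and position be reconstructed.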

[Image caption: Experimentally measured velocity-map image for the ionization of metastable Xe atoms by 7-micrometer light from the FELICE laser. The image shows the velocity distribution of the ionized electrons along (horizontal) and perpendicular to (vertical) the polarization axis.]

What Dennis Gábor, the inventor of holography, couldn't do -- construct a source of coherent electrons -- is now commonplace in experiments with intense, ultra-short laser fields, which readily extract coherent electrons from atoms and molecules. These electrons are the basis for the new holography experiment, which was carried out using Xe atoms. Marc Vrakking describes what happens: "In our experiment, the strong laser field rips electrons from the Xe atoms and accelerates them, before turning them around. It is then as if one takes a catapult and shoots an electron at the ion that was left behind. The laser creates the perfect electron source for a holographic experiment."

Some of the electrons re-combine with the ion, and produce extreme ultra-violet (XUV) light, thereby producing the attosecond pulses that are the basis for the new attosecond science program that is under development at MBI. Most electrons pass the ion and form the reference wave in the holographic experiment. Yet other electrons scatter off the ion, and form the object wave. On a two-dimensional detector the scientists could observe holographic interference patterns caused by the interaction of the object wave with the Coulomb potential of the ion.

Certain conditions had to be met for the experiments to succeed. To create the conditions for holography, the electron source had to be placed as far away as possible from the ion, ensuring that the reference wave was only minimally influenced by it. The experiments were therefore carried out in the Netherlands, making use of the mid-infrared free-electron laser FELICE, in a collaboration that encompassed -- among others -- the FOM Institutes AMOLF and Rijnhuizen. At FELICE, the Xe atoms were ionized using laser light with a 7 micrometer wavelength, creating ideal conditions for the observation of a hologram.

The ionization process produces the electrons over a finite time interval of a few femtoseconds. Theoretical calculations under the guidance of MBI junior group leader Olga Smirnova show that the time dependence of the ionization process is encoded in the holograms, as are any changes in the ion between the time that ionization occurs and the time that the object wave interacts with the ion. This suggests great promise for the new technique. As Vrakking states: "So far, we have demonstrated that holograms can be produced in experiments with intense lasers. In the future we have to learn how to extract all the information that is contained in the holograms. This may lead to novel methods to study attosecond time-scale electron dynamics, as well as novel methods to study time-dependent structural changes in molecules."