Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........


Most Efficient Quantum Memory for Light Developed

The team at the ANU Research School of Physics and Engineering used a technique they pioneered to stop and control light from a laser, manipulating electrons in a crystal cooled to a chilly -270 degrees Celsius. The unprecedented efficiency and accuracy of the system allows the delicate quantum nature of the light to be stored, manipulated, and recalled.

Light passes through the crystal in the quantum memory experiment.

"Light entering the crystal is slowed all the way to a stop, where it remains until we let it go again," explains lead researcher Morgan Hedges. "When we do let it go, we get out essentially everything that went in as a three-dimensional hologram, accurate right down to the last photon.

"Because of the inherent uncertainty in quantum mechanics, some of the information in this light will be lost the moment it is measured, making it a read-once hologram. Quantum mechanics guarantees this information can only be read once, making it perfect for secure communication."

The same efficient and accurate qualities make the memory a leading prospect for quantum computing, which has the potential to be many times faster and more powerful than contemporary computing.

In addition, the researchers say the light storage will allow tests of fundamental physics, such as how the bizarre phenomenon of quantum entanglement interacts with the theory of relativity.

"We could entangle the quantum state of two memories, that is, two crystals," says team leader Dr Matthew Sellars. "According to quantum mechanics, reading out one memory will instantly alter what is stored in the other, no matter how large the distance between them. According to relativity, the way time passes for one memory is affected by how it moves. With a good quantum memory, an experiment to measure how these fundamental effects interact could be as simple as putting one crystal in the back of my car and going for a drive."

Dr Sellars' team has previously performed an experiment that 'stopped' light in a crystal for over a second, more than 1,000 times longer than was previously possible. He said that the team is now bringing together systems that combine the high efficiency with storage times of hours.

The research team includes Dr Jevon Longdell from the University of Otago and Dr Yongmin Li from Shanxi University. The findings are published in Nature.

From sciencedaily.com

Test of Quantum Field Theory and Bose-Einstein Statistics of Photons: Bosons Aren't Fermions, Not Even a Little Bit

Seven years ago, University of California, Berkeley, physicists asked a fundamental and potentially disturbing question: Do bosons sometimes play by fermion rules? Specifically, do photons act like bosons all the time, or could they sometimes act like fermions?

A beam of hot barium atoms exits an oven and passes through a collimator before hitting overlapping laser beams. If photons sometimes acted like fermions, every once in a while a barium atom would absorb two photons and subsequently emit a flash of light. No significant events were observed.

Based on the results of their experiment to test this possibility, published June 25 in the journal Physical Review Letters, the answer is a solid "no."

The theories of physics -- including the most comprehensive theory of elementary particles, Quantum Field Theory, which explains nature's electromagnetic, weak and strong nuclear forces (but not gravity) -- rest on fundamental assumptions, said Dmitry Budker, UC Berkeley professor of physics. These assumptions are based on how the real world works, and often produce amazingly precise predictions. But some physicists would like to see them more rigorously tested.

"Tests of (these assumptions) are very important," said Budker. "Our experiment is distinguished from most other experimental searches for new physics in that others can usually be incorporated into the existing framework of the standard model of particles and forces. What we are testing are some of the fundamental assumptions on which the whole standard model is based."

Among these assumptions is the boson/fermion dichotomy, which is mandated by the Spin-Statistics Theorem of quantum field theory. Bosons, which are governed by Bose-Einstein statistics, are particles with an intrinsic spin of 0, 1, 2 or another integer, and include photons, W and Z bosons, and gluons. The fermions, governed by Fermi-Dirac statistics, are all particles with odd half-integer spins -- 1/2, 3/2, 5/2, etc. -- and include the electron, neutrinos, muons and all the quarks, the fundamental particles that make up protons and neutrons.

Bosons can pile on top of one another without limit, all occupying the same quantum state. At low temperatures, this causes such strange phenomena as superconductivity, superfluidity and Bose-Einstein condensation. It also allows photons of the same frequency to form coherent laser beams. Fermions, on the other hand, avoid one another. Electrons around a nucleus stack into shells instead of collapsing into a condensed cloud, giving rise to atoms with a great range of chemical properties.
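
The dichotomy can be written compactly. In standard textbook notation (a general statement of the theorem, not something taken from the Berkeley experiment itself), the joint wavefunction of two identical particles must be either symmetric or antisymmetric when the particles are exchanged:

    \Psi(x_1, x_2) = +\,\Psi(x_2, x_1)   % bosons (integer spin)
    \Psi(x_1, x_2) = -\,\Psi(x_2, x_1)   % fermions (half-integer spin)

Setting x_1 = x_2 in the fermion case forces \Psi(x, x) = 0, which is why two electrons can never occupy the same state, while nothing prevents bosons from piling up.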

"We have this all-important symmetry law in physics, one of the cornerstones of our theoretical understanding, and a lot depends on it," said Budker, who is also a faculty scientist at Lawrence Berkeley National Laboratory (LBNL). "But we don't have a simple explanation; we have a complex mathematical proof. This really bothered a lot of physicists, including the late Nobel laureate Richard Feynman."

"It's a shame that no simple explanation exists," said Budker, because it ties together basic assumptions of modern physics. "Among these assumptions are Lorentz invariance, the core tenet of special relativity, and invariance under the CPT (charge-parity-time) transformation, the idea that nature looks the same when time is reversed, space is reflected as in a mirror, and particles are changed into antiparticles. Lorentz invariance results from the entanglement of space and time, such that length and time change in reference frames moving at constant velocity so as to keep the speed of light constant.

"Another one of the assumptions of the spin-statistics theorem is microcausality," said UC Berkeley post-doctoral fellow Damon English. "A violation wouldn't exactly be the same type of paradox as travelling back in time to kill your great-grandfather, but more along the lines of receiving a flash of light before it was emitted."

In their experiment, Budker, English and colleague Valeriy Yashchuk, a staff scientist at LBNL's Advanced Light Source, were able to tighten the existing limit on how often photons might act like fermions by more than a factor of a thousand.

"If just one pair of photons out of 10 billion had taken the bait and behaved like fermions, we would have seen it," English said. "Photons are bosons, at least within our experimental sensitivity."

In 1999, Budker and David DeMille, now a professor of physics at Yale University, completed a similar preliminary experiment, conducted partially at Amherst College and partially at UC Berkeley, establishing that photons act as bosons, not fermions. The new experiment improves the precision of the Amherst/UC Berkeley experiment by a factor of about 3,000.

The experiment bombarded barium atoms with photons in two identical laser beams and looked for evidence that the barium had absorbed two photons of the same energy at once, thereby kicking an electron into a higher energy state. The particular two-photon transition the scientists focused on was forbidden only by the Bose-Einstein statistics of photons. If photons were fermions, the transition would "go like gang-busters," said English.

The experiment detected no such "fermionic" photons, establishing the distinctness of bosons and fermions, and validating the assumptions underlying Bose-Einstein statistics and Quantum Field Theory.
"Spacetime, causality, and Lorentz invariance are safe,…for now," English said.

Using the same tabletop experiment, they also observed for the first time that the spin of the nucleus can alter the atomic environment to allow the otherwise Bose-statistics-forbidden transition. The most common isotopes of barium, barium-138 and barium-136, have zero nuclear spin, so the electron levels are undisturbed and the two-photon absorption is impossible. Two other isotopes, barium-135 and -137, have a nuclear spin of 3/2, which creates a hyperfine splitting of the electron energy levels and enables a still very weak, but detectable, two-photon absorption.

"We will keep looking, because experimental tests at ever increasing sensitivity are motivated by the fundamental importance of quantum statistics," said Budker. "The spin-statistics connection is one of the most basic assumptions in our understanding of the fundamental laws of nature."

The work was funded by the National Science Foundation.

From sciencedaily.com

Building a Substitute Pancreas for Diabetics

Implants containing specially wrapped insulin-producing cells derived from embryonic stem cells can regulate blood sugar in mice for several months, according to research presented this month at the International Society for Stem Cell Research conference in San Francisco. San Diego-based ViaCyte (formerly Novocell), which is developing the implant as a treatment for type 1 diabetes, is now beginning the safety testing required for approval from the U.S. Food and Drug Administration before human testing can start.

Replacing the pancreas: Insulin-producing cells (shown here marked in blue), derived from stem cells and encapsulated in a special membrane, might one day regulate blood sugar in type 1 diabetics.

"It's still a long road toward a treatment for diabetes, but in my mind they have made astonishing progress," says Gordon Weir, head of Islet Transplantation and Cell Biology at Joslin Diabetes Center, in Boston. But he cautions that taking the next step is likely to be tricky. The technology "tends to work well in rodents, but moving it to larger animals gets more complicated," says Weir, who is not involved with the company. "You need more cells, and we're guessing the immune system [reaction] is more complex."

In type 1 diabetes, the immune system attacks the insulin-producing beta cells of the pancreas, forcing patients to rely on injections of the hormone to regulate blood sugar. Transplants of pancreatic cells from cadavers to human patients have shown that this type of cell therapy can free type 1 diabetics from daily insulin injections. But the scarcity and variable quality of this tissue makes it an impractical therapy. For the last two decades, scientists have searched for alternative sources of cells, focusing in large part on cells from the pancreas of fetal or neonatal pigs. ViaCyte, which began its efforts more than 10 years ago, has focused on embryonic stem cells.

The research exemplifies the challenges of creating cell replacement therapies from embryonic stem cells. No such treatments yet exist and only one company has won FDA approval to begin human testing. That effort was put on hold last year due to safety concerns.

After years of research, ViaCyte developed a recipe capable of transforming embryonic stem cells into immature pancreatic cells, called progenitors. The recipe is a combination of three small molecules and five proteins, and it attempts to replicate what cells would experience in the developing embryo.

But scientists haven't yet been able to create fully "differentiated" beta cells in a dish. This is important because undifferentiated cells carry risk of turning cancerous. In a paper published in 2008, the company showed that transplanting the pancreatic progenitors into mice pushed these cells to fully differentiate inside the animal, enabling them to regulate blood sugar.

However, in some cases, the cells formed clumps of cancerous tissue called teratomas, a major safety concern with stem cell therapies. So in the new experiments, the scientists encased the cells in a tea-bag-like membrane. "Encapsulation protects cells from getting killed by the immune system and would contain teratoma cells," says Weir. Encapsulation also allows the cells to be removed, if needed, says Kevin D'Amour, a principal scientist at ViaCyte who presented the research.

The pores in the inner layer of the membrane are small enough to prevent the cells from leaking out, while the pores in the outer layer are large enough to encourage blood vessels to grow along the membrane. The implanted cells need access to blood in order to sense and respond to changes in blood sugar, as well as to obtain the oxygen they need to survive.

While encapsulation protects the cells, it also introduces its own problems. "One of the fundamental challenges has been to identify materials that don't cause fibrosis, or scar tissue around material," says Dan Anderson, a chemical engineer at MIT. "That's particularly important here because [fibrosis] can starve cells of oxygen and inhibit their ability to respond to glucose." The company is currently using a prototype membrane from a company called Theracyte, but it is also working on its own customized version.

In the new research, scientists showed that animals whose own insulin producing cells were chemically destroyed could survive with the implant. "They have been completely controlled by the human graft for four months," says D'Amour. In fact, while mice typically have a higher resting blood glucose level than do humans, the animals with human insulin-producing cells had glucose levels that more closely resemble those of humans.

ViaCyte still has a number of issues to solve before its device can be tested in patients. It's not clear how the human immune system will react to the implants, an issue that ViaCyte is studying in collaboration with scientists at the University of California, San Francisco. For example, while the membrane is designed to protect the cells, patients may still require immunosuppressive drugs. Or the cells within the device may need to be tissue-matched to the recipient, much like whole organ transplants.

Living Cell Technologies, headquartered in Australia and New Zealand, has ongoing human tests of encapsulated pancreatic cells derived from pigs in Russia and New Zealand. While the results of the studies have not yet been published, reports from the company based on a small number of patients say the treatment so far appears safe and patients do not require immunosuppressant drugs.

By Emily Singer 
From Technology Review

Working Toward a Smarter, Faster Cloud

Cloud computing services have taken off in recent years. They give businesses flexible access to computing hardware and resources without all the burden of having to manage infrastructure themselves. But there are times when some customers might benefit a great deal from managing the underlying infrastructure. 

In a presentation this week at Usenix Annual Technical Conference '10 in Boston, Vytautas Valancius, a computer science researcher at Georgia Institute of Technology, described a system that would let cloud users customize the path their data takes as it travels through cloud computing platforms. Such a path could span multiple Internet service providers and networks. 

Different types of cloud applications have different needs, Valancius explained. For example, a highly interactive application such as a voice chat program probably needs a high-quality connection. In contrast, a file-backup service that transfers data in bulk might benefit from the least expensive transit between machines.

Today's cloud services, however, send data for both types of applications over the same path. To remedy this, Valancius has been working on a system called Transit Protocol that would let users set a path that matches the needs of a specific application. The work was done with Nick Feamster, an assistant professor at Georgia Tech, Jennifer Rexford, a professor at Princeton University, and Akihiro Nakao, an associate professor at the University of Tokyo.

Valancius says cloud computing companies could have a physical connection to a variety of ISPs, negotiate with each one, and then provide the Transit Protocol service to users. Transit Protocol would let a cloud provider handle these elements and then pass the benefits on to its customers through a specially designed interface.

Transit Protocol is currently deployed at sites in Atlanta, Princeton, NJ, and Madison, WI, and is being used to power a number of academic experiments. The interface requires the user to have advanced networking knowledge. However, Valancius says that future work on the system will make it easier to use.

Currently, Transit Protocol's users have to do their own measurement and monitoring of ISPs to determine which one can provide the best service. Valancius wants to add tools that could do these measurements automatically and give users an overview. He envisions a simpler interface that might, for example, use this measurement data to let users set their preferences for how their data will travel. Transit Protocol would then translate those preferences to specific paths.
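
The proposed interface itself is not spelled out in the article, but the idea of translating per-application preferences into concrete upstream paths can be illustrated with a small, hypothetical sketch; the ISP names, metrics, and selection rule below are invented for illustration and are not part of Transit Protocol.

    # Hypothetical sketch: map an application's preference to an egress path.
    # ISP names, metrics, and the selection rule are illustrative only.

    ISP_PATHS = [
        {"isp": "ISP-A", "latency_ms": 20, "cost_per_gb": 0.09},
        {"isp": "ISP-B", "latency_ms": 80, "cost_per_gb": 0.02},
        {"isp": "ISP-C", "latency_ms": 45, "cost_per_gb": 0.05},
    ]

    def choose_path(preference):
        """Pick a path: 'low_latency' for interactive applications such as
        voice chat, 'low_cost' for bulk transfers such as file backups."""
        if preference == "low_latency":
            return min(ISP_PATHS, key=lambda p: p["latency_ms"])
        if preference == "low_cost":
            return min(ISP_PATHS, key=lambda p: p["cost_per_gb"])
        raise ValueError("unknown preference: " + preference)

    print(choose_path("low_latency")["isp"])  # interactive traffic -> ISP-A
    print(choose_path("low_cost")["isp"])     # bulk backup traffic -> ISP-B

In the envisioned system, a measurement layer would keep figures like these up to date automatically rather than relying on hand-entered values.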

Transit Protocol could be added to a service such as Amazon's Elastic Compute Cloud as an optional feature for customers interested in controlling this aspect of their infrastructure. Valancius notes that big companies could also use the tool internally to vary the behavior of specific applications that run on different machines within their organization.

"As cloud platforms mature to host increasingly complex and demanding applications, customers will want a greater degree of flexibility and control over these resources," says Andrew Warfield, an adjunct professor of computer science at the University of British Columbia and technical director for storage and emerging technologies in the virtualization management division of Citrix Systems. 

Warfield says the researchers have not only described a new and interesting feature that could be added to cloud-based systems, they've also demonstrated that there are widespread opportunities to build infrastructure services that both live in and are used by the cloud. He hopes to see approaches similar to this in areas including storage and databases. 

By Erica Naone 
From Technology Review

A Simpler Route to Plastic Solar Cells

A simplified process for printing polymer solar cells could further reduce the costs of making the plastic photovoltaics. The method, which has been demonstrated on a large-area, roll-to-roll printing system, eliminates steps in the manufacturing process. If it can be applied to a wide range of polymer materials, it could lead to a fast and cheap way to make plastic solar cells for such applications as portable electronics, photovoltaics integrated into building materials, and smart fabrics.

Solar roller: This roll-to-roll printer is fabricating polymer solar cells in a lab at the University of Michigan. The clear plastic substrate is visible on the top left red-colored roller.

Polymer solar cells aren't as efficient as silicon ones in converting sunlight into electricity, but they're lightweight and cheap, a trade-off that could make them practical for some applications. And they're compatible with large-area printing techniques such as roll-to-roll processing. But manufacturing the solar cells is challenging, because if the polymers aren't lined up well at the nanoscale, electrons can't get out of the cell. Researchers now use post-printing processing steps to achieve this alignment. Eliminating these extra steps will, University of Michigan researchers hope, bring down manufacturing costs and complexity.

"Our strategy solves a number of issues at the same time," says L. Jay Guo, professor of electrical engineering at the University of Michigan. Their process involves applying a small amount of force during the printing process with a permeable membrane. The process allows the printing solvents to evaporate and leads to well-ordered polymer layers--without any need for post-processing. These improvements in the structure of the cell's active layer have an additional benefit: cells made using this technique require one fewer layer of materials than polymer solar cells made using other methods. This work is described online in the journal Advanced Materials.

When light of a certain wavelength strikes the semiconducting material in a solar cell, it creates electrons and positively charged holes. To generate an external current, the cell must separate the electrons from the holes so that they can exit. This separation doesn't happen as readily in polymers as it does in inorganic materials like silicon, says Guo. The active layers in polymer solar cells combine two materials, one that conducts holes and one that conducts electrons. Ideally the electron-accepting polymer would be on top of the electron-donating polymer, so that it's near the cathode, allowing as many electrons to exit as possible.

Guo's group found that spreading the polymer mix onto a plastic substrate, then pressing it against a roller coated with silicone, facilitates the formation of this desirable structure. And the pressure from the roller encourages the polymers to crystallize in a matter of seconds, without the need for the time-consuming chemical or thermal treatments. The structure of the polymers is so good, says Guo, that the Michigan researchers could eliminate a layer from the cells without any change in power-conversion efficiency.

So far, Guo has used a common but relatively low-efficiency polymer to fabricate the solar cells, but he says the method should be compatible with higher-efficiency polymer materials. The Michigan cells have an efficiency of only about 3.5 percent. Researchers are working on materials sets that should bring the efficiencies of polymer solar cells up to 12 to 15 percent, a boost that's necessary if polymer solar cells are to reach a broad market and more fully compete with conventional silicon and thin-film cells. 
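
The efficiency figures quoted here refer to the standard power-conversion efficiency of a solar cell, a general photovoltaic definition rather than anything specific to the Michigan devices:

    \eta = \frac{P_{\text{out}}}{P_{\text{in}}} = \frac{J_{SC} \, V_{OC} \, FF}{P_{\text{in}}}

where J_SC is the short-circuit current density, V_OC the open-circuit voltage, FF the fill factor, and P_in the incident light intensity (about 100 mW/cm^2 under standard test conditions). A 3.5 percent cell therefore delivers roughly 3.5 mW per square centimeter of active area under such illumination.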

"I think this process has very strong potential," says Yang Yang, professor of materials science and engineering at the University of California, Los Angeles. "It's uncertain whether this method also works for other polymer systems, but there is no reason why it won't." Yang is collaborating with plastic solar-cell company Solarmer of El Monte, CA, which is on track to reach 10 percent efficiency with its devices by the end of this year.

By Katherine Bourzac 
From Technology Review

'Quantum Computer' a Stage Closer With Silicon Breakthrough

According to the research paper, the scientists have created a simple version of Schrödinger's cat -- which is paradoxically simultaneously both dead and alive -- in the cheap and simple material out of which ordinary computer chips are made.

 The electron orbits a phosphorus atom embedded in the silicon lattice, shown in silver. The undisturbed electron density distribution, calculated from the quantum mechanical equations of motion is shown in yellow. A laser pulse can modify the electron’s state so that it has the density distribution shown in green. Our first laser pulse, arriving from the left, puts the electron into a superposition of both states, which we control with a second pulse, also from the left, to give a pulse which we detect, emerging to the right. The characteristics of this "echo" pulse tell us about the superposition we have made.

"This is a real breakthrough for modern electronics and has huge potential for the future," explained Professor Ben Murdin, Photonics Group Leader at the University of Surrey. "Lasers have had an ever increasing impact on technology, especially for the transmission of processed information between computers, and this development illustrates their potential power for processing information inside the computer itself. In our case we used a far-infrared, very short, high intensity pulse from the Dutch FELIX laser to put an electron orbiting within silicon into two states at once -- a so-called quantum superposition state. We then demonstrated that the superposition state could be controlled so that the electrons emit a burst of light at a well-defined time after the superposition was created. The burst of light is called a photon echo; and its observation proved we have full control over the quantum state of the atoms."

And the development of a silicon based "quantum computer" may be only just over the horizon. "Quantum computers can solve some problems much more efficiently than conventional computers -- and they will be particularly useful for security because they can quickly crack existing codes and create un-crackable codes," Professor Murdin continued. "The next generation of devices must make use of these superpositions to do quantum computations. Crucially our work shows that some of the quantum engineering already demonstrated by atomic physicists in very sophisticated instruments called cold atom traps, can be implemented in the type of silicon chip used in making the much more common transistor."

Professor Gabriel Aeppli, Director of the London Centre for Nanotechnology added that the findings were highly significant to academia and business alike. "Next to iron and ice, silicon is the most important inorganic crystalline solid because of our tremendous ability to control electrical conduction via chemical and electrical means," he explained. "Our work adds control of quantum superpositions to the silicon toolbox."

From sciencedaily.com

Genetically Modified Cell Procedure May Prove Useful in Treating Kidney Failure

Indiana University School of Medicine researchers have successfully treated acute kidney injury in laboratory experiments using cells that were genetically reprogrammed to produce a protective protein. The research suggests there could be a potential future treatment using such cells delivered intravenously instead of surgically.

 This microscopic image demonstrates that modified cells infused to treat renal failure in rats have engrafted into the kidney. The engrafted cells have been tagged with green fluorescence, and all nuclei are labeled with a blue fluorescing dye to show the tissue architecture.

Hospital health care professionals must deal with such renal problems -- during which the kidneys cannot adequately perform their critical roles of removing bodily waste -- in about five percent of all patients, and a much higher percentage of patients in intensive care units.

IU scientists Jesus Dominguez, M.D., and Katherine Kelly, M.D., report in the August issue of the American Journal of Physiology -- Renal Physiology, available online, that they were able to treat acute kidney failure in animal models using cells modified to produce a protein that normally is found when kidneys first develop in embryos. That protein, called SAA, also is produced by the liver in periods of bodily stress, such as during infections, fever or surgery.

In earlier research they had found that applying the SAA protein directly to kidney cells caused those cells to produce tubules like those found in normal kidneys to remove waste products from the blood. Tests determined that the tubules were functional.

The next step was to test whether the protein could have a similar impact in living animals. However, the protein is not easily available, so the researchers modified kidney cells to produce the protein. When the cells were infused into rats with renal failure, their kidney function improved quickly and significantly, the researchers found.

"In other studies, protecting the kidney usually doesn't work after the injury has begun," said Dr. Kelly, assistant professor of medicine. "But this is a significant degree of protection of the kidney, especially for something given after the fact."

In addition, using images from the Indiana Center for Biological Microscopy, the researchers discovered that the infused cells integrated themselves into the kidneys, which showed much less physical damage than the untreated kidneys.

"Now we know that cells can be modified and made to integrate into the kidney from a long distance. So we can give the cells from a peripheral vein and they will go to the kidney and build up, from the inside out. So our model seems to be developing into a transplant model," said Dr. Dominguez, professor of medicine and a physician at the Roudebush VA Medical Center.

The research was supported by grants from the U.S. Department of Veterans Affairs and the Clarian Health Values Fund.

From sciencedaily.com

America's Broadband Dilemma

For millions of people around the world, broadband Internet access is a big part of modern life. We download movies and music, play online games, share photos, and upload information to social-networking sites--all at ever-increasing speeds. Rates of at least 50 megabits per second--enough to download a DVD-quality movie in about 10 minutes--have become mainstream in cities from Seoul to Stockholm. In the United States, however, the broadband landscape is different: the average download speed is about 10 megabits per second, according to the broadband testing firm Ookla, and only 23 people in 100 have broadband subscriptions, according to the International Telecommunications Union (see "The Global Broadband Spectrum"). Statistics from the Organization for Economic Coöperation and Development rank the United States behind more than a dozen other countries--including South Korea, Japan, Canada, the U.K., Sweden, and Belgium--in both broadband penetration and average advertised speed.
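
The "about 10 minutes" figure is straightforward arithmetic, assuming a DVD-quality movie of roughly 4 GB:

    50\ \text{Mb/s} \times 600\ \text{s} = 30{,}000\ \text{Mb} = 3{,}750\ \text{MB} \approx 3.7\ \text{GB}

At the U.S. average of about 10 megabits per second, the same download would take roughly five times as long, on the order of 50 minutes.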


Faced with these statistics--and the widespread assumption that access to high-speed broadband is critical to the country's economic health--the U.S. Federal Communications Commission created the National Broadband Plan, which directs up to $15.5 billion in public funds toward improving U.S. connectivity. The plan aims not only to ensure affordable and reliable broadband for every community but also to equip the majority of households (some 100 million homes) with lines running at speeds of at least 100 megabits per second. It's an attempt to shove the United States into the high-speed age--and all of it, the FCC suggests, is achievable by 2020.

Yet the case for federal investment boils down to one fundamental question: how much public good will $15.5 billion buy? Answering that with any accuracy, it turns out, is nearly impossible. Serious studies on the economic and social effects of wider, faster Internet access are surprisingly few--and many of those that do exist are dated. The answer also depends on how the FCC balances the plan's two goals of inclusivity and innovation.

It is often taken for granted that greater access to high-speed Internet services will boost the economy while improving health care, education, civic engagement, and more. Such assumptions are built into the plan and endorsed by various experts. In March of last year, for example, the Information Technology and Innovation Foundation, a think tank in Washington, DC, published a study suggesting that government support for wired and wireless broadband is vital to future economic development. The benefits of increased Internet access, the report's authors suggested, in turn spur the growth of new networked technologies as well as wholly unforeseen developments.

But Shane Greenstein, a professor of management and strategy at Northwestern University's Kellogg School of Management, says the advantages are in fact far less obvious. "The research challenge is substantial," says Greenstein, one of a handful of academics who have studied the economic impact of broadband. "One problem is that the real impact generally doesn't come in the sector where the investment takes place. For example, when broadband first arrived, who knew that restructuring the music industry would be the first thing to happen?" A lack of empirical research, Greenstein suggests, is also the result of a kind of institutional blindness apparent on nearly every side: "It's in no one's interest to be a skeptic, because it undermines one of the mythologies of broadband--that it is a technological panacea."

A 2007 study by researchers at MIT and at another Washington think tank, the Brookings Institution, did find some benefits when it attempted to discern the effect of increased broadband penetration on job opportunities at a state level. More high-speed access barely seemed to change activity in sectors such as construction, but it appeared to improve opportunities in knowledge industries like finance, education, and health care. The researchers determined that each percentage-point increase in broadband penetration was associated with an overall employment increase of 0.2 percent to 0.3 percent. William Lehr, an economist at MIT and one of the paper's authors, says that the returns diminish as penetration nears the FCC's target of 100 percent, but the advantages remain significant.

It's less clear, however, whether the superfast connections championed by the FCC offer benefits commensurate with the costs. A study published last year by Motu Economic and Public Policy Research, a nonprofit institute in New Zealand, found that increasing broadband speeds may not help businesses' bottom lines. The research, which examined 6,000 companies, found that productivity increased significantly when service was upgraded from narrowband to broadband; but there was "no discernible additional effect arising from a shift from slow to fast broadband."

Nonetheless, the FCC does want commercial speeds to increase rapidly, to help the United States keep pace with rival countries. At the same time, the National Broadband Plan articulates a social and moral imperative to make sure that everybody can access the same basic services and operate in a 21st-century economy. Indeed, the plan often conflates these two goals. But the truth is that they require different technical approaches, political policies, and levels of investment. 

The challenge, then, is to work out how to balance inclusion with innovation on a budget of $15.5 billion. Guidelines that the European Commission published last October after reviewing projects across the continent suggest that heavy government investment is the best way to extend broadband to underserved areas. That could be expensive, however. Northwestern's Greenstein points out that bringing broadband to the farthest reaches of the United States will come at rapidly increasing costs--as much as $5,000 per household in some rural parts of the country, compared with a couple of hundred dollars in built-up areas.

Instead, the FCC recommends a fresh approach: freeing up new portions of the wireless spectrum and encouraging mobile broadband providers to fill the gaps in coverage. Some academics have endorsed this solution, but wireless broadband technologies are unproved. WiMax, for example, can deliver speeds of 50 megabits per second, but that speed declines with distance from the transmitting station.

Achieving ever-higher speeds may also require significant government intervention. According to a study by Harvard University's Berkman Center for Internet and Society, the key to speeding up existing service is an initial government investment combined with policies that encourage competition. Municipal investment in fiber networking capacity, which is then leased to commercial parties, has led to increased broadband speeds in European cities such as Amsterdam and Stockholm. But it will be tricky to follow this path in the United States, because the industry has been set up differently. In the 1990s, the FCC decided to encourage competition between infrastructures such as cable and DSL, rather than between different providers using a shared backbone. As a result, companies in the United States have been less willing to shoulder the cost of investing in new broadband technology, since doing so would give them little advantage over any direct competitor.

Yet private investment will be vital to increasing broadband speeds. Korea's rapid advances in the past decade were attributable in no small part to funding from companies. And there are encouraging signs that something similar is beginning to happen in the United States; in the most obvious example, Google is planning to roll out one-gigabit-per-second fiber lines to at least 50,000 and up to 500,000 people in trial locations across the country. But an FCC task force has suggested that more than $300 billion of private funding would eventually be required on top of the government's expenditure in order to boost speeds to 100 megabits per second or higher for the entire nation.

In the end, the success of the National Broadband Plan will be judged according to the targets that the FCC has set for increasing both penetration and speeds by 2020. But the tension between those two distinct goals leaves a desirable outcome very much in doubt. 

Bobbie Johnson is a freelance writer based in San Francisco. Previously, he was the technology correspondent for the Guardian newspaper.
By Bobbie Johnson

Cell Transplants for Macular Degeneration

Rats genetically engineered to lose their sight can be protected from blindness by injections of human neural stem cells, according to research presented at the International Society for Stem Cell Research conference in San Francisco last week. StemCells, a startup in Palo Alto, CA, plans to use the positive results to file for approval from the U.S. Food and Drug Administration to begin human trials. The company is already testing the cells in children with a rare, fatal brain disorder called Batten's disease.

Reviving the retina: Human neural stem cells injected into the retina of rats that were engineered to go blind form a layer of tissue (purple) between the animals’ photoreceptors (blue) and retinal pigment epithelium (black), which typically nourishes photoreceptors. A startup called StemCells aims to begin human testing of the cells for retinitis pigmentosa and macular degeneration, two degenerative diseases that cause blindness.

The company's cells are isolated from human fetal tissue and then grown in culture. To determine whether these cells can protect against retinal degeneration, scientists studied rats that were genetically engineered to progressively lose their photoreceptors--cells in the retina that convert light into neural signals. These animals are commonly used to model macular degeneration and retinitis pigmentosa, two major causes of blindness that result from cell loss in the retina. Researchers injected about 100,000 cells into the animals' eyes when the rats were 21 days old. According to Alexandra Capela, a scientist at StemCells who presented the work, the cells migrate over time, forming a layer between the photoreceptors and a layer of tissue called the retinal pigment epithelium, which nourishes and supports the photoreceptors. 

Using electrodes implanted into the visual system, scientists measured the lowest levels of light the rats could detect. They found that the cells protected vision in the part of the retina in which they were implanted. They also tested the animals' acuity by examining the maximal speed at which they followed a series of moving bars, a natural rat reflex. "The treated animals maintain a high level of visual acuity, while the untreated animals decline steadily," said Capela. 

The implanted cells don't actually develop into new photoreceptors; in fact, they appear to maintain their undifferentiated state. So it's not clear how they protect against blindness. "The neuroprotective effect in the rats is interesting, but the mechanism is still pretty obscure," says Thomas Reh, a neuroscientist at the University of Washington, in Seattle, who was not involved in the study. 

Raymond Lund, a scientist at the Casey Eye Institute at Oregon Health Sciences University who collaborated on the study, says the cells "seem to somehow bypass the defect without actually correcting it." This may be because the cells make growth factors known to keep damaged cells alive, says Lund, who has also tested the cells in a different animal model of blindness. Another hypothesis is that the cells help clear cellular debris that builds up in the retinas of these rats and harms the photoreceptors.

While the cells seem to survive for months in mice, it's not yet clear how long they will survive in humans, who live much longer lives, or whether they will affect the long-term function of the retina in other ways. For example, they might interfere with the interaction between the photoreceptors and the retinal pigment epithelium, says Reh. 

Because the stem-cell treatment doesn't replace lost cells, it most likely needs to be administered early in the course of a disease. "This would not be something for advanced macular degeneration, where the receptors are already damaged," says Lund. "We would want to spot patients who are seriously at risk and hopefully slow or stop the disease process." A number of genetic factors have been identified that boost risk for the disease, and genetic testing is available. 

Taken together, retinitis pigmentosa and macular degeneration are the most common causes of blindness in people 40 and older. No treatments yet exist for the most common form of macular degeneration, which accounts for about 90 percent of cases. A number of novel therapies are under development, including drugs, cell transplants, and implanted devices. Advanced Cell Technology, based in Worcester, MA, uses human embryonic stem cells to grow retinal pigment epithelium, often the first cell type to die off in age-related macular degeneration and other eye diseases. The company filed for permission to begin clinical trials of the cells last November.

By Emily Singer 
From Technology Review

Startup Aims for Perfect Pixels

The race is on to build the perfect e-reader and tablet display. It needs to be easy on the eyes for e-reading, bright and beautiful for playing video, and efficient enough to last for days on a single battery charge.

MEMS the word: The Pixtronix display, seen here attached to a circuit board, uses MEMS shutters and a colored backlight to produce color video.

An Andover, MA, startup called Pixtronix hopes it has the right combination of technology and business plan to bring such a screen to market. Like a liquid crystal display, Pixtronix's display uses a backlight, but unlike most LCDs it also reflects ambient light, allowing for an easier-to-read monochrome e-reader mode. The pixels in the display are made of tiny silicon shutters: micro-electromechanical systems (MEMS) that open and close to emit red, blue, and green light in rapid sequence, creating the illusion of a range of colors.

Unlike most other display technologies, there are no filters, polarizing films, or liquid crystals for light to pass through in the Pixtronix system. This means the backlight needs to be much less intense, using a quarter of the power that standard LCDs use, says Nesbitt Hagood, founder, president and CTO of Pixtronix. In a Pixtronix display, color is produced by the flickering colored backlight in combination with shutters opening and closing. When the shutters are open, ambient light reflects within the MEMS structure to amplify the color, says Hagood. Turn off the backlight, and an open shutter produces a whitish-gray pixel. When the shutter is closed, the pixel is black.
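
The color mixing Hagood describes is field-sequential: each frame is divided into red, green, and blue sub-frames, and the fraction of each sub-frame during which a pixel's shutter stays open sets the perceived color. The sketch below is an illustrative timing calculation only; the 60 Hz frame rate, the equal sub-frames, and the code itself are assumptions, not Pixtronix's actual drive scheme.

    # Illustrative field-sequential color timing (not Pixtronix's drive scheme).
    FRAME_MS = 1000.0 / 60.0       # one 60 Hz frame, in milliseconds (assumed)
    SUBFRAME_MS = FRAME_MS / 3.0   # equal red, green, and blue sub-frames (assumed)

    def shutter_open_times(r, g, b):
        """Return how long (in ms) the MEMS shutter stays open during each
        colored sub-frame to approximate an 8-bit (r, g, b) color."""
        return {channel: (value / 255.0) * SUBFRAME_MS
                for channel, value in (("red", r), ("green", g), ("blue", b))}

    # A medium orange: open for the whole red sub-frame, about half the green
    # sub-frame (by time), and closed during blue.
    print(shutter_open_times(255, 128, 0))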

A Pixtronix display differs slightly from another up-and-coming MEMS display technology, called mirasol, from Qualcomm. In this display, pixels are made of MEMS light chambers with movable, reflective surfaces that cause light waves to interfere with each other. Color is determined by the distance between the reflective surfaces. Mirasol is an extremely low-power display because it doesn't use a backlight at all, but its video quality is currently somewhat grainy. Another display startup, called Unipixel, has developed shutter technology somewhat similar to that of Pixtronix. A backlight and thin polymer film shutters produce both color images and video. In May, the technology licensing firm Rambus acquired a portion of Unipixel's intellectual property.
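
The color selection in such an interferometric display is ordinary thin-film interference. As a rough, generic relation (not Qualcomm's published design equations), reflection at normal incidence is reinforced when the round trip across the gap spans a whole number of wavelengths:

    2d = m\lambda, \qquad m = 1, 2, 3, \ldots

where d is the spacing between the reflective surfaces and \lambda is the wavelength that is strongly reflected; moving the membrane changes d and therefore the color.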

Pixtronix hopes to license its technology to LCD manufacturers, which could adapt the equipment used to make LCDs to produce the screens' MEMS shutters. "Billions and billions of dollars have been spent developing relatively mature [LCD] manufacturing facilities to get nice looking, high-yield displays," says Hagood. "If you're going to have a competitive product in the marketplace, you have to leverage that investment." 

Pixtronix isn't the only company trying to do this. Pixel Qi, which spun out of the One Laptop Per Child Project, is also building displays in LCD facilities. But whereas Pixel Qi has redesigned the components of LCDs--layers of optical polarizers, filters, and liquid crystals--to produce a display with both backlit-color and e-reader modes, Pixtronix has done away with all of the components of an LCD except for the backlight and the layer of transistors on glass that control the pixels.

Paul Semenza, an analyst for DisplaySearch, a technology research firm, says Pixtronix's approach is relatively simple compared to LCD technology. But he notes that it is tough for novel technologies to break into the display market. "LCD makers have a track record of beating back innovations that were thought to be 'better' than LCD," he says. "It's a little hard to say yet whether it will succeed."

Another hurdle to adoption, says Semenza, is something called "color breakup," in which the red, blue, and green colors appear to separate out, instead of blending together to produce a single color. Some people are more sensitive to this effect, which can occur with MEMS displays.

According to Hagood, Pixtronix has developed an algorithm that determines how fast to sequence pixel colors to minimize color breakup. "So far, people are pretty happy," he says. "Image quality isn't going to be the challenge."

Hagood adds that the biggest challenge will be the same as it was for LCD makers in the early days of that technology: getting high yields with low-cost manufacturing. He expects the first displays with Pixtronix technology to be in products by late 2011. Then users can judge whether the perfect tablet screen has truly arrived.

By Kate Greene 
From Technology Review

Flexible Touch Screen Made with Printed Graphene

Graphene, a sheet of carbon just one atom thick, has spectacular strength, flexibility, transparency, and electrical conductivity. Spurred on by its potential for application in new devices like touch screens and solar cells, researchers have been toying with ways to make large sheets of pure graphene, for example by shaving off atom-thin flakes and chemically dissolving chunks of graphite oxide. Yet in the six years since graphene's discovery, laboratory experiments have mainly yielded mere flecks of the stuff, and mass manufacture has seemed a long way away.

See through: Researchers have created a flexible graphene sheet with silver electrodes printed on it (top) that can be used as a touch screen when connected to control software on a computer (bottom).

"The future of the field certainly isn't flaking off pencil shavings," says Michael Strano, a professor of chemical engineering at MIT. "The large-area production of monolayer graphene was a serious technological hurdle to advancing graphene technology."

Now, besting all previous records for synthesis of graphene in the laboratory, researchers at Samsung and Sungkyunkwan University, in Korea, have produced a continuous layer of pure graphene the size of a large television, spooling it out through rollers on top of a flexible, see-through, 63-centimeter-wide polyester sheet.
"It is engineering at its finest," says James Tour, a professor of chemistry at Rice University who has been working on ways to make graphene by dissolving chunks of graphite. "[People have made] it in a lab in little tiny sheets, but never on a machine like this."

The team has already created a flexible touch screen by using the polymer-supported graphene to make the screen's transparent electrodes. The material currently used to make transparent electronics, indium tin oxide, is expensive and brittle. Producing graphene on polyester sheets that bend is the first step to making transparent electronics that are stronger, cheaper, and more flexible. "You could theoretically roll up your iPhone and stick it behind your ear like a pencil," says Tour. The Korean team built on rapid advances in recent months. "The field really has advanced in the past 18 months," says Strano. "What they show here is essentially a monolayer over enormous areas--much larger than we've seen in the past."

Roll and reel: A freshly made sheet of graphene is transferred onto a polyester sheet as it passes between hot rollers.

Last year, Rodney Ruoff and his team at the University of Texas in Austin showed that graphene could be grown on copper foil. Carbon vaporized at 1,000 degrees Celsius would settle atom by atom on the foil, which was a few centimeters across. Byung Hee Hong, a professor at Sungkyunkwan University and corresponding author on the paper, says the use of a flexible base presented a solution to the graphene mass-manufacturing dilemma.

"[This] opened a new route to large-scale production of high-quality graphene films for practical applications," says Hong. "[Our] dramatic scaling up was enabled by the use of large, flexible copper foils fitting the tubular shape of the furnace." And the graphene sheets could get even bigger. "A roll-to-roll process usually allows the production of continuous films," says Hong.

In Hong's method, a sheet of copper foil is wrapped around a cylinder and placed in a specially designed furnace. Carbon atoms carried on a heated stream of hydrogen and methane meet the copper sheet and settle on it in a single uniform layer. The copper foil exits the furnace pressed between hot rollers, and the graphene is transferred onto a polyester base. Silver electrodes are then printed onto the sheet.

The technique shows some potential to be scaled up for mass production. "They particularly show that they are able to grow the graphene [in a way] that is compatible with manufacturing," says Strano. "It's a very economical way to manufacture materials."

Hong sees application for the method in the production of graphene-based solar cells, touch sensors, and flat-panel displays. But he says products will be a while in coming. "It is too early to say something about mass production and commercialization," he says. Current manufacturing processes for indium tin oxide use a spreading technology that is different from roll-to-roll printing. "However, the situation will be changed when bigger flexible-electronics markets are formed in the near future," Hong says.

By Nidhi Subbaraman
From Technology Review

Nanotubes Give Batteries a Jolt

A lithium-ion battery with a positive electrode made of carbon nanotubes delivers 10 times more power than a conventional battery and can store five times more energy than a conventional ultracapacitor. The nanotube battery technology, developed by researchers at MIT and licensed to an undisclosed battery company, could mean batteries that extend the range of electric vehicles and provide longer periods without recharging for electronic gadgets, including smartphones.

Nano power: The pores between the nanotubes in this transmission electron microscopy image can store lithium ions in a high-power battery.

Researchers have been trying to make electrodes for lithium-ion batteries from carbon nanotubes because their high surface area and high conductivity promise to improve both energy and power density relative to conventional forms of carbon. But working with the material has proved challenging--most methods for assembling carbon nanotubes require a binding agent that brings down the conductivity of the electrode, and lead to the formation of clumps of the material, reducing the surface area. The electrodes made by the MIT group, however, have a very high surface area for storing and reacting with lithium. This high surface area is critical both to the high storage capacity of the electrodes, as well as their high power: because lithium is stored on the surface, it can move in and out of the electrode rapidly, enabling faster charging and discharging of the battery.

The key to the performance of the MIT electrodes is an assembly process that creates dense, interconnected, yet porous carbon-nanotube films, without the need for any fillers. The group, led by chemical engineering professor Paula Hammond and mechanical engineering professor Yang Shao-Horn, creates water solutions of carbon nanotubes, treating one batch so that the tubes are positively charged and the other so that they are negatively charged. They then alternately dip a substrate, such as a glass slide, in the two solutions, and the nanotubes, attracted by differences in their charge, cling to one another very strongly in uniform, thin layers. The researchers had previously demonstrated that when heated and removed from the substrate, these dense yet porous films could store a lot of charge and release it quickly--acting like an electrode in an ultracapacitor.
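
The alternating-dip procedure is easy to picture as a loop. The sketch below is only a schematic of the layer-by-layer idea; the growth-per-cycle figure, rinse steps, and target thickness are invented for illustration and do not come from the MIT work.

    # Schematic of layer-by-layer assembly from oppositely charged nanotube
    # solutions. All numbers are placeholders, not values from the MIT paper.
    NM_PER_BILAYER = 1.5   # assumed film growth per (+)/(-) dip cycle, in nm

    def assemble_film(target_thickness_nm):
        """Alternately 'dip' a substrate in the two solutions until the film
        reaches the target thickness; return the list of process steps."""
        thickness = 0.0
        steps = []
        while thickness < target_thickness_nm:
            steps.append("dip in positively charged nanotube solution, then rinse")
            steps.append("dip in negatively charged nanotube solution, then rinse")
            thickness += NM_PER_BILAYER
        steps.append("heat-treat and release the ~%.1f nm film" % thickness)
        return steps

    for step in assemble_film(10):
        print(step)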

Now the MIT group has adapted these methods to make battery electrodes. Lithium-ion batteries are charged and discharged when lithium ions move from one electrode to the other, driving or being driven by an external current. The more total lithium the battery can store, the greater its total energy storage capacity. The faster the ions can move out of one electrode and into the other, the greater its power. In work published this week in the journal Nature Nanotechnology, the MIT group showed that lithium ions in a battery electrolyte react with oxygen-containing chemical groups on the surface of the carbon nanotubes in the film. Because of the huge surface area and porous structure of the nanotube electrodes, there are many places for the ions to react, and they can travel in and out rapidly, which gives the nanotube battery high energy capacity and power, says Shao-Horn.
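
The distinction the article draws between capacity and power corresponds to two standard relations (general battery definitions, not results from the Nature Nanotechnology paper):

    E = \int V \, dq    % energy: total charge stored (lithium taken up), weighted by cell voltage
    P = I \, V          % power: how quickly that charge can be delivered at voltage V

More stored lithium means more charge q and hence more energy; faster ion transport through the porous nanotube film sustains a larger current I and hence more power.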

"This work has demonstrated once more that the development of methods for careful structural control at the nanoscale leads to major improvements in materials performance," says Nicholas Kotov, professor of chemical engineering at the University of Michigan. "I believe that it's just the beginning of the major improvement of lithium batteries using a materials engineering approach."

The next step, says Hammond, is to "speed things up." Using the dipping method, the group is able to make relatively thick nanotube films, but it takes a week. "If you want to make a car battery, you need to make it thicker, and over large areas," says Hammond. Instead of dipping a substrate in the two nanotube solutions, Hammond's group is now making the electrodes in a few hours by alternately spraying dilute mists of the two nanotube solutions. A major advantage of this misting method is that it's compatible with large-area printing processes that promise speed and compatibility with a wide range of substrates. For example, nanotube batteries might be printed directly onto integrated circuits.

By Katherine Bourzac 
From Technology Review

Drug Targets Lupus by Tricking Immune System

Lupus is a baffling autoimmune disease: some patients develop skin rashes, others develop heart disease, and some suffer kidney damage that can endanger their lives. And it still isn't clear what causes all these symptoms.

Now two companies are working together to attack the disease with an experimental drug that tricks the immune system into behaving more normally. Results from phase II trials of the drug, called Lupuzor, showed a 53% improvement in symptoms among patients on the drug compared to a 36% improvement in those on placebo. That was enough to generate a tremendous amount of excitement leading up to the 9th International Congress on Systemic Lupus Erythematosus in Vancouver, British Columbia, which begins June 24. Lupuzor's developers--ImmuPharma of London, U.K., and Cephalon of Frazer, PA--plan to release more detailed data from the trial during the conference.

Many drugs commonly used to treat autoimmune diseases, such as chemotherapy drugs, are known as immunosuppressants because they effectively shut down the entire immune system. Lupuzor, by contrast, is an immunomodulator--a drug that targets the specific immune cells involved in the disease. "In trials to date, we haven't seen suppression," says Peter Brown, vice president of clinical pharmacology and experimental medicine at Cephalon. That's important because disabling the immune system entirely can cause unwanted side effects, such as dangerous infections. 

Lupuzor targets T and B cells. In lupus, these immune cells malfunction, generating antibodies against proteins that healthy immune systems would normally ignore. Scientists at France's National Center for Scientific Research split those proteins into smaller fragments and tested whether they might reverse the wayward immune response. One peptide was particularly effective in vitro, and in mouse models of lupus it significantly extended life. ImmuPharma, founded in 1999, led the early studies; Cephalon licensed Lupuzor in 2009 and is now in the early stages of planning a phase III trial.

The Vancouver conference should give physicians and scientists a more complete picture of how Lupuzor stacks up against other lupus drugs in development. Lupus experts have been paying particularly close attention to Human Genome Sciences' Benlysta, a monoclonal antibody that targets B cells. Benlysta is currently in phase III testing, and the company is expected to release further details of those results at the conference. So far, the results have been impressive, with patients reporting significant reductions in "flares," or bouts of debilitating symptoms.

No new drugs to treat lupus have been approved in over 50 years. Physicians currently have precious few choices for lupus treatment: chemotherapy, steroids that control inflammation, and a malaria drug that works in some patients. But the side effects of the harsher treatments can be so severe that "as many patients die from that as those who die from lupus," says Tammy Utset, an associate professor at the University of Chicago Medical Center who was also an investigator in the Benlysta trials.

Much of the challenge in developing lupus drugs stems from the variability of the disease. Some patients suffer flares frequently, for example, while others might get them only once a year. And the symptoms can strike many different organs. That makes measuring the response to an experimental drug difficult, says Sandra Raymond, CEO of the Lupus Foundation of America. To standardize these measurements, Human Genome Sciences worked closely with the U.S. Food and Drug Administration to develop a composite index that measures response to Benlysta according to several different parameters, including the number of both severe and moderate flares. The index also includes global assessments by treating physicians of whether or not the patients are getting worse. "What that's done is carved a pathway for other companies, because they've put together an index that really gets at how the patient is feeling," Raymond says.

Cephalon has yet to announce the timing of its phase III program or details about how it will measure patient response to Lupuzor in those trials. Still, the fact that there are any products at all in late-stage testing is a welcome relief to physicians like Kyriakos Kirou, a rheumatologist at the Mary Kirkland Center for Lupus Care at Hospital for Special Surgery in New York. "The drugs we have now are nonspecific and not very potent," Kirou says. "We need to strike the right balance in controlling the immune system, and hopefully these new medicines will do a better job of that. It's a very exciting time in lupus." 

By Arlene Weintraub
From Technology Review

Astronomers catch moment of star's birth

What could be the youngest known star has been photographed in the earliest stages of its birth. Not yet a fully developed true star, the object has only just begun pulling in matter from a surrounding envelope of gas and dust, according to a new study in the Astrophysical Journal.

The study’s authors found the object using the Submillimeter Array in Hawaii and the Spitzer Space Telescope. Known as L1448-IRS2E, it’s located in the Perseus star-forming region of our Milky Way galaxy, about 800 light years away.

The team reckons it's in between the prestellar phase, when a particularly dense region of a molecular cloud first begins to clump together, and the protostar phase, when gravity has pulled enough material together to form a dense, hot core out of the surrounding envelope.

"It’s very difficult to detect objects in this phase of star formation, because they are very short-lived and they emit very little light," said Xuepeng Chen, a postdoctoral associate at Yale and lead author of the paper.
Most protostars are at least as luminous as the sun, with large dust envelopes that glow at infrared wavelengths. Because L1448-IRS2E is less than one tenth as bright as this, the team believes it's too dim to be considered a true protostar. 

Yet they also discovered that the object is ejecting streams of high-velocity gas from its center, confirming that some sort of preliminary mass has already formed and the object has developed beyond the prestellar phase. This kind of outflow is seen in protostars as a result of the magnetic field surrounding the forming star, but has never before been seen at such an early stage.

"Stars are defined by their mass, but we still don’t know at what stage of the formation process a star acquires most of its mass," said Héctor Arce, assistant professor of astronomy at Yale and an author of the paper. "This is one of the big questions driving our work."

By Emma Woollacott
From tgdaily.com

How DNA Is Copied Onto RNA Revealed Through Three-Dimensional Transcription Film

Transcription involves about fifty regulatory molecules that interact with one another to begin reading a gene at the right place and the right time; the slightest irregularity in any one of these molecules disturbs transcription. An understanding of these initiation and regulation mechanisms is therefore essential to explaining gene expression. The structural biology researchers at IGBMC study molecular structures to gain a better understanding of how they function. Patrick Schultz's team focuses in particular on the architecture of the molecules involved in transcription and is attempting to decode the mechanisms of their interactions.

Gene expression takes place in two stages: the transcription of DNA into RNA by an enzyme called RNA polymerase, followed by the translation of that RNA into proteins, whose behaviour shapes the characteristics of each individual.

An 'image-by-image' analysis
An analysis of the transcription complexes by electron cryomicroscopy allows a molecule to be observed hydrated, in a state close to its natural one. Each photograph, taken using a microscope, shows thousands of specimens of the same molecule from different angles and at different instants in their reaction cycle. The statistical analysis of these images, performed by Patrick Schultz's team, revealed different three-dimensional conformations, which correspond to different stages of transcription initiation. 'We performed image-by-image sequencing and made a film of the initial stages of transcription,' says Schultz.

The factor TFIID, the main player in the transcription process
Patrick Schultz's team is interested in a protein complex that acts as an assembly platform in the initiation phase of transcription: the factor TFIID. Through interaction with the activator Rap1, which is bound upstream of the gene to be transcribed, TFIID is attracted to the DNA and binds to it. Combined with another factor, TFIIA, it changes conformation and allows the RNA polymerase to initiate transcription. The novel feature of this mechanism is the formation of a DNA loop, which positions the RNA polymerase exactly at the start of the sequence of the gene to be transcribed.

The structure of the transcription factor TFIID obtained after image analysis is represented in yellow on an electron cryomicroscopy image background, showing the frozen hydrated molecules in dark grey. The transcription activator Rap1 (red) interacts with the factor TFIIA (blue) and contributes to forming a DNA loop (green).

What is electron cryomicroscopy?
The biological molecules in living organisms exist in an aqueous environment, which must be preserved whilst observing the molecules. In order to be 'seen', however, molecules must be placed in an electron microscope, which operates in a vacuum and dehydrates the sample. The solution, developed in the 1980s, is to use refrigeration to keep the specimen hydrated and to examine it by electron cryomicroscopy. A very thin film (approximately 100 nm, or one ten-thousandth of a millimetre thick) of the suspension containing the sample to be analysed must be created in order to be transparent to electrons. (Thin film shown in light blue in Figure A.) This film is cooled very rapidly (at a rate of approximately 10,000°C per second) by plunging it into liquid ethane cooled to -170°C. This freezing speed prevents the formation of ice crystals, and the sample (yellow in Figure A) is trapped in a layer of vitrified water. The cold chain must be maintained throughout the observation period using a cold plate. The molecules (dark grey in Figure B) are hydrated and observed without contrast agent.
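
As a rough sense of scale for this plunge-freezing step, the short Python sketch below estimates how long the film spends cooling at the quoted rate; the room-temperature starting point is an assumption, not a figure from the text.

# Back-of-the-envelope arithmetic on the cooling figures quoted above.
start_temp_c = 20.0          # assumed: the thin film starts near room temperature
ethane_temp_c = -170.0       # temperature of the liquid ethane bath
cooling_rate_c_per_s = 10_000.0

cooling_time_s = (start_temp_c - ethane_temp_c) / cooling_rate_c_per_s
print(f"Approximate cooling time: {cooling_time_s * 1000:.0f} ms")   # roughly 19 ms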

From sciencedaily.com

Dark Pulses from Quantum-Dot Laser

When you think of a laser, you probably imagine a continuous beam of light. But many lasers emit incredibly intense and short pulses of light--these "pulsed lasers" are used in medical and laboratory devices, and in industrial equipment. Now, researchers have developed a new type of pulsed laser that uses quantum dots to emit bursts not of light, but of darkness--a trick that could prove useful for optical communication and rapid chemical analysis.

On again, off again: The dark pulse laser, which uses quantum dots, is seen here as the thin strip attached to wires.

The new "dark pulse laser" was developed by scientists at the National Institute of Standards and Technology (NIST) and the research institute JILA in Boulder, CO. The NIST laser emits light punctuated with extremely short bursts of darkness. "Think of it as a continuous wave laser, except with a really fast shutter," says Richard Mirin, a scientist at NIST.

This shutter creates dark pulses lasting just 90 picoseconds, a speed that could help scientists probe ultrafast chemical and biological reactions. A dark pulse laser could also be used in a fiber-optic telecommunication scheme in which information is encoded as dark pulses, which can travel long distances without degrading in quality.
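
For a sense of what a 90-picosecond dark pulse means in practice, the sketch below converts that duration into two rough numbers: the length such a pulse would occupy in optical fiber and the maximum rate at which non-overlapping pulses could be sent. The fiber's refractive index (about 1.5) is an assumption for illustration, not a value from the article.

# Back-of-the-envelope figures for a 90-picosecond dark pulse.
C_VACUUM = 3.0e8        # speed of light in vacuum, m/s
N_FIBER = 1.5           # assumed refractive index of silica fiber
PULSE_S = 90e-12        # dark pulse duration reported by NIST

pulse_length_m = (C_VACUUM / N_FIBER) * PULSE_S
max_pulse_rate_hz = 1.0 / PULSE_S

print(f"Spatial extent in fiber: about {pulse_length_m * 100:.1f} cm")         # ~1.8 cm
print(f"Non-overlapping pulse rate: about {max_pulse_rate_hz / 1e9:.0f} GHz")  # ~11 GHz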

The pulses are generated by quantum dots inside a chip made of ultrathin layers of semiconducting materials. A periodic drop in intensity of about 70 percent is caused by a mismatch in the speeds with which the quantum dots and the surrounding materials respond to the electrical current and to internally produced photons. Semiconductor lasers are already found in telecommunications systems, DVD players, and laser pointers. What sets this design apart is that it uses quantum dots--nanoscale structures that behave like artificial atoms and emit light when excited--to produce the dark pulses.

NIST's Mirin says that the group initially wanted to make a bright pulse laser using quantum dots. Quantum dots can be used to make lasers that have a broad range of colors. "It turns out that the process of discovery led us to something interesting with this particular [quantum dot] configuration," he says.

Dirk Englund, professor of electrical engineering and applied physics at Columbia University, says that the dark pulses created by the scientists at NIST and JILA are similar to well-known quasi-particles called "dark solitons." Regular solitons are pulses of light that pass through a special optical material that keeps them from dispersing, or spreading out, as they travel. Dark solitons are the "absence of energy in a continuous beam background," Englund says.

But it is difficult to generate dark solitons, which is why the technique hasn't been used in telecommunications, says Mirin. The setup is cumbersome, and sometimes only a single dark soliton is produced. The new dark pulse laser makes it easier to produce a soliton-like effect, says Mirin.

"While it does not appear that these dark pulses are actually solitons," says Englund, "they are similar and could prove useful in communications and optical measurements applications."

It is too early to promise that dark pulses will revolutionize telecommunications, Mirin says. Since today's communications systems use bright pulses of light, optical fibers have been engineered to reduce the amount of energy lost due to dispersion, which means dark pulses couldn't travel effectively along existing fiber. Dark pulse lasers would need their own specially engineered type of fiber. Still, he's encouraged that the discovery of a compact and reliable source of dark pulses could open up new areas of research.

By Kate Greene 
From Technology Review

Plastic Antibodies Fight Toxins

For the first time, researchers have shown that a nonbiological molecule called a plastic antibody can work just like a natural antibody. In animal tests, the plastic particles bind to and neutralize a toxin found in bee stings; the toxin and antibody are then cleared to the liver, the same path taken by natural antibodies. Researchers are now developing plastic antibodies for a wider range of disease targets in hopes of broadening the availability of antibody therapies, which are currently very expensive.

Toxic target: The toxin melittin, labeled purple in these fluorescent images, spreads throughout the body of an untreated mouse, shown at bottom. The mouse at top has been injected with an artificial antibody, also fluorescently labeled, that binds to the toxin and takes it to the liver. The spread of the toxin throughout the treated mouse’s body is also more limited, which is why less of its body appears purple in this image.

For more than 20 years, biochemists have attempted to mimic antibodies' ability to zero in on their targets, as part of a strategy to make more effective and cheaper therapeutics and diagnostics. "Though antibodies are produced on an industrial scale today because they're so important, the cost is very, very high," says Kenneth Shea, professor of chemistry at the University of California, Irvine. That's because antibodies are grown in animals; they're complex molecules that can't be made in a test tube, or even by bacteria. And antibodies, like other proteins, are very fragile. Even under refrigeration, they last just months. The question Shea and others have asked for 20 years, he says, is "would it be possible to design them from inexpensive, abiotic starting materials?" Such plastic antibodies could be made cheaply and then sit on the shelf, in theory, for years.

In 2008, Shea's group, working with researchers from the Tokyo Institute of Technology, demonstrated for the first time that plastic antibodies made using a technique called molecular imprinting could bind to a target as strongly and specifically as natural antibodies. Molecular imprinting involves synthesizing a polymer in the presence of a target molecule; the polymer grows around the target, which "imprints" it with the target's shape. It's analogous to making a plaster cast of one's hand, says Shea.

Antibodies and their targets fit together like a key in a lock, or like a hand in a plaster cast, but they are also held to their targets by chemical bonds and electrical interactions. Drawing on these properties of natural antibodies, Shea's group tailored the method to make polymers that specifically target large proteins in biological solutions. The approach starts from the properties of the target molecule--in this case melittin, the toxin in bee stings--and selects starting materials that have an affinity for that target, while screening out materials that are attracted to other, more common blood proteins. The group also took care to make the plastic antibody smaller than previous molecularly imprinted polymers, which were too big to be recognized by the body.

Shea's plastic antibody targeting melittin performed well in test tubes, but there was still skepticism about whether it would work in the complex environment of the body. This month in the Journal of the American Chemical Society, the University of California researchers describe promising studies in mice. They attached different fluorescent imaging probes to melittin and to the plastic antibody, injected both into mice, and watched what happened in real time. Because the probes were two different colors, the researchers could watch as the polymer met its target in vivo and as the two were then cleared to the liver. Mice given only the toxin, and not the plastic antibody, had much worse symptoms, and the toxin was more widely distributed throughout their bodies.

"They show that these materials are biocompatible and really act like antibodies--it's kind of surprising," says Ken Shimizu, professor of biochemistry at the University of South Carolina. Researchers had suspected that the body might not recognize the plastic particles as antibodies and thus they would be ineffective, or that they might get gummed up with other particles in the complex mixture that is the bloodstream.

Shea says that he's been contacted by several pharmaceutical companies that are interested in seeing how the work develops. David Spivak, professor of chemistry at Louisiana State University, agrees that the method is "a general strategy that will work again and again." "These particles have huge advantages in terms of stability and low cost," says Spivak. "I just hope this work is reproducible for many different targets."

The California researchers developed their imprinting methods using melittin because it's relatively inexpensive and easy to obtain, and it's a good representative of a class of small protein toxins, some of which are much more deadly. "Our next steps are to pursue more serious toxins," says Shea.

By Katherine Bourzac 
From Technology Review