Showing posts with label MATTER AND ENERGY. Show all posts

Physicists Crack Fusion Mystery

One reason it's taking decades to develop fusion reactors that can generate electricity is that physicists don't completely understand what's going on in the high-temperature plasma inside a reactor. Under certain conditions, the plasma—which is where fusion reactions take place—disappears in under a millisecond.

Plasma chamber: This experimental fusion reactor at MIT could test the new theory.


A new theory developed by researchers at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) explains what happens just before the plasma disappears. The explanation could help engineers design better reactors. And that might help them increase the power output of a reactor, perhaps doubling the electricity they could produce, and making fusion reactors more economical.

Researchers have made a lot of progress on fusion technology—since 1970, the energy produced in experimental fusion reactors has increased by about 12 orders of magnitude, greater than the improvement in processing power in microchips over the same period, says Martin Greenwald, a fusion researcher at MIT. But for all the improvements in fusion research reactors, they still aren't useful—they don't produce more energy than they consume, and they can't be run continuously, both of which would be necessary for a power plant.
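A quick back-of-envelope check of that comparison: twelve orders of magnitude over roughly four decades implies a doubling time of about a year, faster than the roughly two-year doubling usually quoted for microchip processing power. The 42-year span below is our illustrative assumption, not a figure from the article.

```python
import math

# Convert "12 orders of magnitude since 1970" into an implied doubling
# time, for comparison with Moore's-law-style growth. Illustrative only.
years = 42                   # assumed span, roughly 1970 to the early 2010s
orders_of_magnitude = 12     # cited improvement in fusion energy output

# A 10**12 improvement corresponds to 12 / log10(2) doublings.
doublings = orders_of_magnitude / math.log10(2)
doubling_time = years / doublings

print(f"{doublings:.0f} doublings in {years} years")
print(f"implied doubling time: {doubling_time:.1f} years (chips: ~2 years)")
```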

The new work, like so much in the realm of fusion research, is a step toward practical fusion power, but by no means does it solve all the problems. Based on experiments, there is a practical limit to how dense the plasma in a reactor can be. Beyond a certain density, the plasma becomes unstable, dissipates its energy, and disappears. Because researchers don't understand exactly what causes this, it's difficult to predict exactly when the collapse will happen, so researchers avoid getting close to that limit in experimental reactors.

The Princeton work allows engineers to better predict what will happen in the reactor, potentially allowing them to design reactors that get closer to a theoretically optimum density for the plasma. That, in turn, could increase the amount of power a fusion power plant could generate.

According to the researchers' theory, islands develop within the plasma that cool off and cause the plasma to disappear. These islands—which are easily identified—could be selectively heated with microwaves, the researchers think, which could keep the plasma stable.

David Gates, a principal research scientist at PPPL and one of the key researchers on the project, says he expects they will be able to test the theory in research reactors this year.

While the theory is plausible, Greenwald says, it doesn't solve all the problems for reactors. It only explains part of the mechanisms involved in limiting the density of the plasma. And researchers still need to solve many practical problems before optimizing energy density is even an issue, he says.

Solving these problems will require a combination of better theories, more computing power, better algorithms, and big experiments. That's why researchers still say practical fusion power plants remain decades away.

By Kevin Bullis
From Technology Review

Generating Power from Salty Water: Unique Salt Allows Energy Production to Move Inland

"We are taking two technologies, each having limitations, and putting them together," said Bruce E. Logan, Kappe Professor of Environmental Engineering. "Combined, they overcome the limitations of the individual technologies."

 Microbial reverse-electrodialysis test cell. (Credit: Penn State, Dept of Public Information)

The technologies Logan refers to are microbial fuel cells (MFC) -- which use wastewater and naturally occurring bacteria to produce electricity -- and reverse electrodialysis (RED) -- which produces electricity directly from the salinity gradient between salty and fresh water. The combined technology creates a microbial reverse-electrodialysis cell (MRC). The researchers describe MRCs in  the March 1 edition of Science Express.

RED stacks extract energy from the ionic difference between fresh water and salt water. A stack consists of alternating ion exchange membranes -- positive and negative -- with each RED membrane pair contributing additively to the electrical output. Unfortunately, using only RED stacks to produce electricity is difficult because a large number of membranes is required when using water at the electrodes, due to the need for water electrolysis.
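As a sketch of why each membrane pair "contributes additively": every permselective ion-exchange membrane develops a Nernst potential set by the salt-concentration ratio across it, and the stack sums those potentials in series. The concentrations and membrane count below are illustrative assumptions, and the calculation ignores real losses such as imperfect permselectivity and internal resistance.

```python
import math

# Idealized per-membrane Nernst potential and series stack voltage for a
# reverse-electrodialysis (RED) stack. Numbers are illustrative.
R, T, F = 8.314, 298.15, 96485    # gas constant, temperature (K), Faraday constant
c_salty, c_fresh = 0.5, 0.01      # mol/L, roughly seawater vs. river water

e_membrane = (R * T / F) * math.log(c_salty / c_fresh)  # volts per ideal membrane
n_pairs = 20                                            # assumed membrane pairs
stack_voltage = n_pairs * 2 * e_membrane                # two membranes per pair

print(f"per-membrane potential: {e_membrane * 1000:.0f} mV")
print(f"ideal {n_pairs}-pair stack voltage: {stack_voltage:.1f} V")
```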

Using exoelectrogenic bacteria -- bacteria found in wastewater that consume organic material and produce an electric current -- reduces the number of stacks needed and increases electric production by the bacteria.

Logan, working with Roland Cusick, graduate student in environmental engineering, and postdoctoral fellow Younggy Kim, placed a RED stack between the electrodes of an MFC to form the MRC.

While the researchers previously showed that an MRC can work with natural seawater, the organic matter in water will foul the membranes without extensive precleaning and treatment of the water. Seawater use restricts MRC operation to coastal areas, but food waste, domestic waste and animal waste contain about 17 gigawatts of power throughout the U.S. One nuclear reactor typically produces 1 gigawatt.

Rather than rely on seawater, the researchers used ammonium bicarbonate, an unusual salt. An ammonium bicarbonate solution works similarly to seawater in the MRC and will not foul the membranes. The ammonium bicarbonate is also easily removed from the water above 110 degrees Fahrenheit. The ammonia and carbon dioxide that make up the salt boil out, and are recaptured and recombined for reuse.

"Waste heat makes up 7 to 17 percent of energy consumed in industrial processes," said Logan. "There is always a source of waste heat near where this process could take place and it usually goes unused."

The researchers tested their ammonium bicarbonate MRC and found that the initial production of electricity was greater than that from an MRC using seawater.

"The bacteria in the cell quickly used up all the dissolved organic material," said Logan. "This is the portion of wastewater that is usually the most difficult to remove and requires trickling filters, while the particulate portion which took longer for the bacteria to consume, is more easily removed."

The researchers tested the MRC only in a fill and empty mode, but eventually a stream of wastewater would be run through the cell. According to Logan, MRCs can be configured to produce electricity or hydrogen, making both without contributing to greenhouse gases such as carbon dioxide. The MRC tested produced 5.6 watts per square meter.

Logan also said not having to process wastewater would save about 60 gigawatts.

The King Abdullah University of Science and Technology supported this work.

From sciencedaily

Cambridge team uses solar cells in OLED screen to power smartphones

A team of researchers at the University of Cambridge is getting closer to smartphones that need recharging less often. Their idea is to harvest energy from wasted light in an OLED display: the screen uses solar cells to absorb scattered and otherwise wasted light and feed the recovered energy back into the phone.

 A thin-film system harvests energy from wasted light in an OLED display.

IEEE Fellow Arokia Nathan along with the Cambridge team have developed a prototype device that converts ambient light into electricity. Solar cells used in the prototype are made of thin film hydrogenated amorphous silicon, within the smartphone screen. 

Only around 36 percent of the light produced by an OLED display is projected forward; the rest is lost, scattering and bleeding out around the edges. The researchers' solution is to install photovoltaic cells on the back and sides of OLED screens to capture that lost light.

They also worked out a solution -- a thin-film transistor circuit -- to even out the voltage spikes produced by the solar cells, as fluctuations in the voltage provided by the solar cell could damage the phone’s battery. The device captures both ambient light and the otherwise wasted screen light leaking around the edges.

According to reports, the team worked with the energy group at Cambridge's Centre for Advanced Photonics and Electronics to integrate a thin-film supercapacitor for intermediate energy storage. 

The end result is a system that combines photovoltaics, transistors, and a supercapacitor, achieving an average efficiency of 11 percent and a peak efficiency of 18 percent.
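To put those figures in perspective, here is a rough estimate of how much of the screen's total light output such a system could recover. The assumption that the 11 percent average efficiency applies to all of the lost light is ours, not the article's.

```python
# Back-of-envelope recovery estimate from the article's figures:
# ~36 percent of OLED light exits the front; the rest is available to
# harvest; the system converts captured light at ~11 percent on average.
forward_fraction = 0.36
lost_fraction = 1 - forward_fraction       # light available to harvest
avg_efficiency = 0.11                      # average system efficiency

recovered = lost_fraction * avg_efficiency  # fraction of total OLED output
print(f"recovered: {recovered:.1%} of the screen's light output")
```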

For the smartphone user, those numbers promise at least less strain on the battery. The Cambridge team is not promising an end to recharging, but the ability to recover a fraction of the power the phone consumes.

More work is ahead. The team is exploring different circuit designs and materials with the aim of increasing the energy harvesting system’s efficiency. Nathan has said other energy scavenging schemes such as MEMS based kinetic energy harvesting may bring improvements.

By Nancy Owano
From physorg.
 

Metal Oxide Simulations Could Help Green Technology

 Computer simulations show that metal oxides in water go through many short-lived shapes and structures.

The work, by researchers at the University of California, Davis, appears in the current issue of the journal Nature Materials.

The new paradigm could lead to a better understanding of corrosion and how toxic minerals leach from rocks and soil. It could also help in the development of "green" technology: new types of batteries, for example, or catalysts for splitting water to produce hydrogen fuel.

"This is a global change in how people should view these processes," said William Casey, UC Davis professor of chemistry and co-author of the study with James Rustad, a former geology professor at UC Davis who now works as a scientist at Corning Inc. in New York.

Previously, when studying the interactions of water with clusters of metal oxides, researchers tried to pick and study individual atoms to assess their reactivity. But "none of it really made sense," Rustad said.

Using computer simulations developed by Rustad, and comparing the resulting animations with lab experiments by Casey, the two found that the behavior of an atom on the surface of the cluster can be affected by an atom some distance away.

Instead of moving through a sequence of transitional forms, as had been assumed, metal oxides interacting with water fall into a variety of "metastable states" -- short-lived intermediates, the researchers found.

For example, in one of Rustad's animations, a water molecule approaches an oxygen atom on the surface of a cluster. The oxygen suddenly pulls away from another atom binding it into the middle of the cluster and leaps to the water molecule. Then the structure collapses back into place, ejecting a spare oxygen atom and incorporating the new one.

From sciencedaily

New Process Makes Heat-Harvesting Materials Cheaply

High-efficiency thermoelectric materials could lead to new types of cooling systems, and new ways to scavenge waste heat for electricity. Researchers at Rensselaer Polytechnic Institute in Troy, New York, have now developed an easy, inexpensive process to make such materials.

The materials made by the RPI team already perform as well as those on the market, and the new process, which involves zapping chemicals in a microwave oven, offers room for improvement. "We haven't even optimized the process yet," says Ganpati Ramanath, a materials science and engineering professor at RPI. "We're confident that we can increase the efficiency further."

 Cooked to order: Zapping raw materials in a microwave oven and drying the resulting solution produces a black powder (top) made of hexagonal bismuth telluride nanoplates (bottom).

Thermoelectric materials convert heat into electricity, and vice versa. They are used in niche applications such as power generation on spacecraft and temperature-controlled car seats. If they were cheaper and more efficient, they could perhaps be used to make lightweight refrigerators, cooling systems for computer chips and buildings, and for using car exhaust heat to power electronics such as headlights and the radio. 

Good thermoelectrics need to conduct electricity well but heat poorly. One way to boost the efficiency of such materials is to give them nanoscale features that block the flow of heat without restricting electric current.
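That trade-off is conventionally captured by the thermoelectric figure of merit ZT = S^2 * sigma * T / kappa: a high Seebeck coefficient S and electrical conductivity sigma raise ZT, while a high thermal conductivity kappa lowers it. The values below are typical literature numbers for bismuth telluride, not measurements from the RPI team.

```python
# Thermoelectric figure of merit, ZT = S^2 * sigma * T / kappa.
# Illustrative values, roughly typical of bulk bismuth telluride.
S = 200e-6      # Seebeck coefficient, V/K
sigma = 1.0e5   # electrical conductivity, S/m
kappa = 1.5     # thermal conductivity, W/(m*K)
T = 300.0       # operating temperature, K

ZT = S**2 * sigma * T / kappa
print(f"ZT = {ZT:.2f}")   # nanostructuring aims to cut kappa and raise ZT
```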

Researchers have made nanostructured materials by breaking up crystals into fine powder. But this process is energy intensive and only results in high-efficiency p-type thermoelectric materials—the kind rich in positively charged particles called holes. But both p-type and n-type materials (which have an abundance of electrons) are needed for practical devices. 

"We've shown that we can make both p- and n-type materials, and we can do this very scalably and more cost-effectively," Ramanath says. "We can make gram quantities in minutes." 

Ramanath and his colleagues make a solution from raw materials such as tellurium and bismuth chloride in an organic solvent, and put it in a domestic microwave oven for two to three minutes. They get a solution containing hexagonal nanoplates, which they press together and heat to make nanopellets. By using a solvent containing sulfur, the researchers get sulfur-doped nanoplates that are n-type. 

The technique, presented in a Nature Materials paper posted online last week, makes p-type materials that are as efficient as the best ones on the market, while the n-type materials are at least 25 percent more efficient. One of the biggest commercial thermoelectric device manufacturers is now interested in adopting the new materials and process.

"This is the first nanostructured n-type material with a high [efficiency] value," says John Badding, a professor of chemistry at Penn State University.

The key breakthrough of the RPI work, according to Badding, is that the researchers are building the nanostructured materials from the bottom up using chemistry. This means they can fine-tune the properties of the building blocks and their assembly to improve the material's properties. "The way they're making the material is a big deal," he says. "The hope is that in the future, this type of approach could lead to better [efficiency]."

By Prachi Patel
From Technology Review

Chemists Solve an 84-Year-Old Theory On How Molecules Move Energy After Light Absorption

Conservation of angular momentum is a fundamental property of nature, one that astronomers use to detect the presence of satellites circling distant planets. In 1927, it was proposed that this principle should apply to chemical reactions, but a clear demonstration has never been achieved.

 MSU chemist Jim McCusker and postdoctoral researcher Dong Guo proved an 84-year-old theory.

In the current issue of Science, MSU chemist Jim McCusker demonstrates for the first time that the effect is real, and suggests how scientists could use it to control and predict chemical reaction pathways in general.

"The idea has floated around for decades and has been implicitly invoked in a variety of contexts, but no one had ever come up with a chemical system that could demonstrate whether or not the underlying concept was valid," McCusker said. "Our result not only validates the idea, but it really allows us to start thinking about chemical reactions from an entirely different perspective."

The experiment involved the preparation of two closely related molecules that were specifically designed to undergo a chemical reaction known as fluorescence resonance energy transfer, or FRET. Upon absorption of light, the system is predisposed to transfer that energy from one part of the molecule to another.

McCusker's team changed the identity of one of the atoms in the molecule from chromium to cobalt. This altered the molecule's properties and shut down the reaction. The absence of any detectable energy transfer in the cobalt-containing compound confirmed the hypothesis.

"What we have successfully conducted is a proof-of-principle experiment," McCusker said. "One can easily imagine employing these ideas to other chemical processes, and we're actually exploring some of these avenues in my group right now."

The researchers believe their results could impact a variety of fields including molecular electronics, biology and energy science through the development of new types of chemical reactions.

Dong Guo, a postdoctoral researcher, and Troy Knight, former graduate student and now research scientist at Dow Chemical, were part of McCusker's team. Funding was provided by the National Science Foundation.

From sciencedaily

Gasoline Fuel Cell Would Boost Electric Car Range

If you want to take an electric car on a long drive, you need a gas-powered generator, like the one in the Chevrolet Volt, to extend its range. The problem is that when it's running on the generator, it's no more efficient than a conventional car. In fact, it's even less efficient, because it has a heavy battery pack to lug around.

 Gas guzzler: The fuel cell developed at the University of Maryland.

Now researchers at the University of Maryland have made a fuel cell that could provide a far more efficient alternative to a gasoline generator. Like all fuel cells, it generates electricity through a chemical reaction, rather than by burning fuel, and can be twice as efficient at generating electricity as a generator that uses combustion.

The researchers' fuel cell is a greatly improved version of a type that has a solid ceramic electrolyte, and is known as a solid-oxide fuel cell. Unlike the hydrogen fuel cells typically used in cars, solid-oxide fuel cells can run on a variety of readily available fuels, including diesel, gasoline, and natural gas. They've been used for generating power for buildings, but they've been considered impractical for use in cars because they're far too big and because they operate at very high temperatures—typically at about 900 ⁰C.

By developing new electrolyte materials and changing the cell's design, the researchers made a fuel cell that is much more compact. It can produce 10 times as much power, for its size, as a conventional one, and could be smaller than a gasoline engine while producing as much power.

The researchers have also lowered the temperature at which the fuel cell operates by hundreds of degrees, which will allow them to use cheaper materials. "It's a huge difference in cost," says Eric Wachsman, director of the University of Maryland Energy Research Center, who led the research. He says the researchers have identified simple ways to improve the power output and reduce the temperature further still, using methods that are already showing promising results in the lab. These advances could bring costs to a point where they are competitive with gasoline engines. Wachsman says he's in the early stages of starting a company to commercialize the technology.

Wachsman's fuel cells currently operate at 650 ⁰C, and his goal is to bring that down to 350 ⁰C for use in cars. Insulating the fuel cells isn't difficult since they're small—a fuel cell stack big enough to power a car would only need to be 10 centimeters on a side. High temperatures are a bigger problem because they make it necessary to use expensive, heat-resistant materials within the device, and because heating the cell to operating temperatures takes a long time. By bringing the temperatures down, Wachsman can use cheaper materials and decrease the amount of time it takes the cell to start.

Even with these advances, the fuel cell wouldn't come on instantly, and turning it on and off with every short trip in the car would cause a lot of wear and tear, reducing its lifetime. Instead, it would be paired with a battery pack, as a combustion engine is in the Volt, Wachsman says. The fuel cell could then run more steadily, serving to keep the battery topped up, while the battery handles bursts of acceleration.

The researchers achieved their result largely by modifying the solid electrolyte material at the core of a solid-oxide fuel cell. In fuel cells on the market, such as one made by Bloom Energy, the electrolyte has to be made thick enough to provide structural support. But the thickness of the electrolyte limits power generation. Over the last several years, researchers have been developing designs that don't require the electrolyte to support the cell so they can make the electrolyte thinner and achieve high power output at lower temperatures. The University of Maryland researchers took this a step further by developing new multilayered electrolytes that increase the power output still more.

The work is part of a larger U.S. Department of Energy effort, over the past decade, to make solid-oxide fuel cells practical. The first fruits of that effort likely won't be fuel cells in cars—so far, Wachsman has only made relatively small fuel cells, and significant engineering work remains to be done. The first applications of solid oxide fuels in vehicles may be on long-haul trucks with sleeper cabs.

Equipment suppliers such as Delphi and Cummins are developing fuel cells that can power the air conditioners, TVs, and microwaves inside the cabs, potentially cutting fuel consumption by 85 percent compared to idling the truck's engine. The Delphi system also uses a design that allows for a thinner electrolyte, but it operates at higher temperatures than Wachsman's fuel cell. The fuel cell could be turned on on Monday and left to run at a low rate all week, and still deliver the 85 percent reduction. Delphi has built a prototype and plans to demonstrate its system on a truck next year.

By Kevin Bullis
From Technology Review

Startup to Capture Lithium from Geothermal Plants

As portable electronics get more popular and the market for electric vehicles takes off, demand for lithium—a critical element in rechargeable lithium-ion batteries—could soar. Yet just two countries, Chile and Australia, dominate global lithium production.

 Brine time: A Simbol Materials engineer works on equipment used to separate lithium, manganese, and zinc from geothermal brine.

 California startup Simbol Materials thinks it can increase domestic production of lithium by extracting the element, along with manganese and zinc, from the brine used by geothermal plants.

In the late 1990s, the U.S. produced 75 percent of the world's lithium carbonate, but now it makes only 5 percent. This is, in part, because U.S. manufacturers couldn't compete with low-cost lithium chemicals from Chile. The U.S. produces no manganese at all. "Yet we have this resource, already being harnessed for geothermal power production," says Luka Erceg, Simbol's CEO. "This is an enormous opportunity to harvest clean renewable energy and produce critical materials in a sustainable manner." 

Worldwide demand for lithium chemicals was about 102,000 tons in 2010 and is expected to rise to as much as 320,000 tons by 2020, mostly because of increased electric-vehicle use. The world's largest lithium resources are estimated by the U.S. Geological Survey to be in Bolivia. Most manufacturers, including the world's largest, in Chile, typically make the material by pumping brine into pools to evaporate in the sun for 18 to 24 months. This process leaves behind a concentrated lithium chloride that's converted into lithium carbonate. The only U.S. producer, Chemetall Foote, drills for brine at Silver Peak in Nevada.
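The implied annual growth rate behind those demand figures can be worked out directly; the ten-year compounding window is taken from the 2010 and 2020 figures cited above.

```python
# Compound annual growth rate implied by the cited lithium demand figures.
demand_2010 = 102_000   # tons of lithium chemicals
demand_2020 = 320_000   # projected tons
years = 10

cagr = (demand_2020 / demand_2010) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")
```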

Simbol plans to piggyback on a 50-megawatt geothermal plant near the Salton Sea in Imperial Valley, California, that pumps hot brine from deep underground to generate steam to drive a turbine. The plant currently injects the brine, which contains 30 percent dissolved solids, including lithium, manganese, and zinc, back into the ground after the steam is produced. Simbol will divert the brine from the power plant, before reinjection, into its processing equipment. There, the still-warm brine will flow through a proprietary medium that filters out the salts within hours. Simbol has also acquired the assets and intellectual property from a now-defunct Canadian company for a purification process that creates the world's highest-purity lithium carbonate. Erceg expects to compete with the lowest-cost Chilean producers, which produce lithium at $1,500 a ton.

Simbol currently runs a pilot plant that filters 20 gallons a minute. The commercial plant, near the Salton Sea, will begin construction in 2012 and will have the capacity to produce 16,000 tons of lithium carbonate annually. The world's third-largest producer, by comparison, makes 22,000 tons. By 2020, Simbol plans to triple production by expanding to more geothermal plants, Erceg says. But for now, it is buying low-grade lithium carbonate from other manufacturers for purification, and it expects to sell the high-purity product overseas before the end of this year.

Other lithium-mining projects are planned or underway around the world, including two more in Nevada. Keith Evans, a geologist and industrial minerals expert, says that if they all come online, global production in 2020 could be over 426,000 tons, far outstripping demand. Nevertheless, more U.S. production could make the country self-sufficient. Plus, he says, Simbol could have an advantage over other U.S. companies. "If their process is as good as they say it is, it could be a very-low-cost producer," Evans says. "It is potentially a very exciting project, if it works."

By Prachi Patel
From Technology Review

A Super-Absorbent Solar Material

A new nanostructured material that absorbs a broad spectrum of light from any angle could lead to the most efficient thin-film solar cells ever.

Light catcher: This scanning electron microscope image shows the super absorbent nanostructures, which measure 400 nanometers at their base.

Researchers are applying the design to semiconductor materials to make solar cells that they hope will save money on materials costs while still offering high power-conversion efficiency. Initial tests with silicon suggest that this kind of patterning can lead to a fivefold enhancement in absorbance. 

Conventional solar cells are typically a hundred micrometers or more thick. Researchers are working on ways to make thinner solar cells, on the order of hundreds of nanometers thick rather than micrometers, with the same performance, to lower manufacturing costs. However, a thinner solar cell normally absorbs less light, meaning it cannot generate as much electricity.
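A simple Beer-Lambert sketch shows why thickness matters: in a single pass, the absorbed fraction of light is 1 - exp(-alpha * d). The absorption coefficient below is a rough, illustrative value for silicon in the visible, not a figure from the article; nanoscale patterning aims to beat this single-pass limit.

```python
import math

# Single-pass Beer-Lambert absorption versus film thickness.
alpha = 1.0e6                       # absorption coefficient, 1/m (illustrative)
for d_nm in (100, 1000, 100_000):   # film thickness in nanometers
    absorbed = 1 - math.exp(-alpha * d_nm * 1e-9)
    print(f"{d_nm:>7} nm film absorbs {absorbed:.1%} in one pass")
```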

Some researchers are turning to exotic optical effects that emerge at the nanoscale to solve this conundrum. Harry Atwater, a professor of applied physics and materials science at Caltech and a pioneer of the field, has now come up with a way of patterning materials at the nanoscale that turns them into solar super-absorbers.

Atwater worked with Koray Aydin, now an assistant professor of electrical engineering and computer science at Northwestern University, to develop the super-absorber design, which takes advantage of a phenomenon called optical resonance. Just as a radio antenna will resonate with and absorb certain radio waves, nanostructured optical antennas can resonate with and absorb visible and infrared light. The length of a structure determines what wavelength of light it will resonate with. So Atwater and Aydin designed structures that effectively have many different lengths: wedge shapes with pointy tips and wide bases. The thin, nanoscale wedges strongly absorb blue light at the tip and red light at the base.
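The radio-antenna analogy can be made concrete with the half-wave-dipole rule, L = c / (2f): longer structures resonate with longer waves. Real plasmonic resonances also depend heavily on geometry and material, so this is intuition only, and the frequencies below are illustrative.

```python
# Half-wave dipole length versus frequency, from radio waves to light.
c = 3.0e8                      # speed of light, m/s
for f_hz, label in ((1.0e8, "FM radio"), (4.3e14, "red light")):
    L = c / (2 * f_hz)         # half-wave resonant length, meters
    print(f"{label}: half-wave length ~ {L:.2e} m")
```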

Atwater and Aydin demonstrated this broadband effect in a 260-nanometer-thick film made of a layer of silver topped with a thin layer of silicon dioxide and finished with another thin layer of silver carved with arrays of wedges that are 40 nanometers at their tips. Atwater says they chose these materials because they are particularly challenging: in their unpatterned state, they're both highly reflective; but the patterned films can absorb an average of 70 percent of the light across the entire visible spectrum. This work is described in the online journal Nature Communications.

Kylie Catchpole, a research fellow at the Australian National University in Canberra, says the design is promising because it works over a broad band of the spectrum. These effects, Catchpole says, "are usually very sensitive to wavelength." However, she notes, the designs will have to be applied to other materials to work in solar cells.

Aydin and Atwater are now doing just that. The researchers have made a 220-nanometer-thick silicon film that absorbs the same amount of light as an unpatterned film 25 times thicker.

By Katherine Bourzac
From Technology Review

New Method of Growing High-Quality Graphene Promising for Next-Gen Technology

Kaustav Banerjee, a professor in the Electrical and Computer Engineering department and director of the Nanoelectronics Research Lab at UCSB, has been studying carbon nanomaterials for more than seven years. He led the research team that perfected methods of growing sheets of graphene, as detailed in a study published in the November 2011 issue of the journal Carbon.

"Our process has certain unique advantages that give rise to high quality graphene," says Banerjee. "For the electronics industry to effectively use graphene, it must first be grown selectively and in larger sheets. We have developed a synthesis technique that yields high-quality and high-uniformity graphene that can be translated into a scalable process for industry applications."

 UCSB researchers have successfully controlled the growth of a high-quality bilayer graphene on a copper substrate using a method called chemical vapor deposition (CVD), which breaks down molecules of methane gas to build graphene sheets with carbon atoms.

University of Manchester researchers Andre Geim and Konstantin Novoselov were awarded the 2010 Nobel Prize in Physics for their pioneering isolation and characterization of the material, famously lifting flakes of graphene from graphite with adhesive tape. To launch graphene into futuristic applications, however, researchers have been seeking a controlled and efficient way to grow a higher quality of this single-atom-thick material in larger areas.

The discovery by UCSB researchers turns graphene production into an industry-friendly process by improving the quality and uniformity of graphene using efficient and reproducible methods. They were able to control the number of graphene layers produced -- from mono-layer to bi-layer graphene -- an important distinction for future applications in electronics and other technology.

"Intel has a keen interest in graphene due to the many possibilities it holds for the next generation of energy-efficient computing, but there are many roadblocks along the way," added Intel Fellow Shekhar Borkar. "The scalable synthesis technique developed by Professor Banerjee's group at UCSB is an important step forward."

As a material, graphene is the thinnest and strongest in the world -- more than 100 times stronger than diamond -- and is capable of acting as an ultimate conductor at room temperature. If it can be produced effectively, graphene's properties make it ideal for advancements in green electronics, super strong materials, and medical technology. Graphene could be used to make flexible screens and electronic devices, computers with 1,000 GHz processors that run on virtually no energy, and ultra-efficient solar power cells.

Key to the UCSB team's discovery is their understanding of graphene growth kinetics under the influence of the substrate. Their approach uses a method called low pressure chemical vapor deposition (LPCVD) and involves disintegrating the hydrocarbon gas methane at a specific high temperature to build uniform layers of carbon (as graphene) on a pretreated copper substrate. Banerjee's research group established a set of techniques that optimized the uniformity and quality of graphene, while controlling the number of graphene layers they grew on their substrate.

According to Dr. Wei Liu, a post-doctoral researcher and co-author of the study, "Graphene growth is strongly affected by imperfection sites on the copper substrate. By proper treatment of the copper surface and precise selection of the growth parameters, the quality and uniformity of graphene are significantly improved and the number of graphene layers can be controlled."

Professor Banerjee and credited authors Wei Liu, Hong Li, Chuan Xu and Yasin Khatami are not the first research team to make graphene using the CVD method, but they are the first to successfully refine critical methods to grow a high quality of graphene. In the past, a key challenge for the CVD method has been that it yields a lower quality of graphene in terms of carrier mobility -- or how well it conducts electrons. "Our graphene exhibits the highest reported field-effect mobility to date for CVD graphene, having an average value of 4000 cm2/V.s with the highest peak value at 5500 cm2/V.s. This is an extremely high value compared with the mobility of silicon." added Hong Li, a Ph.D. candidate in Banerjee's research group.

"Kaustav Banerjee's group is leading graphene nanoelectronics research efforts at UCSB, from material synthesis to device design and circuit exploration. His work has provided our campus with unique and very powerful capabilities," added David Awschalom, Professor of Physics, Electrical and Computer Engineering, and Director of the California NanoSystems Institute (CNSI) at UCSB where Banerjee's laboratory is located. "This new facility has also boosted our opportunities for collaborations across various science and engineering disciplines."

"There is no doubt graphene is a superior material. Intrinsically it is amazing," says Banerjee. "It is up to us, the scientists and engineers, to show how we can use graphene and harness its capabilities. There are challenges in how to grow it, how to transfer or not to transfer and pattern it, and how to tailor its properties for specific applications. But these challenges are fertile grounds for exciting research in the future."

Their research was supported by the National Science Foundation and conducted at the California NanoSystems Institute (CNSI) and Materials Research Laboratory (MRL) facilities at UC Santa Barbara.

Dipping May Improve Ultracapacitors and Batteries

A simple trick could improve the ability of advanced ultracapacitors, or supercapacitors,  to store charge. The technique, developed by Stanford University researchers, could enable the use of new types of nanostructured electrode materials that store more energy.

 Wrap up: Scanning electron microscope images show the surface of nanostructured graphene-manganese oxide electrodes covered with conductive carbon nanotubes (top) and a polymer (bottom).


While ultracapacitors provide quick bursts of power and can be recharged many more times than batteries without losing their storage capacity, they can store only a 10th as much energy as batteries, which limits their applications. To improve their energy density, researchers have focused on the use of electrode materials with greater surface area—such as graphene and carbon nanotubes—which can hold more charge-carrying ions.

The Stanford team, led by Yi Cui and Zhenan Bao, used composite electrodes made of graphene and manganese oxide. Manganese oxide is considered an attractive electrode material because, "one, manganese is abundant so it's very low cost," Cui says. "It also has high theoretical capacity to store ions for supercapacitors." However, in the past its use has been hindered by its low conductivity, which makes conveying ions in and out of the material difficult.

The researchers dipped the composite electrodes into either a carbon nanotube solution or a conductive polymer solution. The coating improves the electrodes' conductivity and hence their capacitance—their ability to store charge—by 20 percent and 45 percent respectively. The researchers report their work in a paper that appeared online in the journal Nano Letters. "This is an important advancement," says Lu-Chang Qin, a physics professor at the University of North Carolina at Chapel Hill, who has recently developed similar graphene–manganese oxide electrodes. These results "promise hopes for a new generation of supercapacitors," Qin says. However, he points out that the Stanford team has yet to measure the energy density of its new electrodes. Qin has collaborated with Japanese researchers to make electrodes from carbon nanotube graphene. These have an energy density of 155 watt-hours per kilogram, comparable with that of nickel–metal hydride batteries.

Bor Jang, co-founder of Nanotek Instruments in Dayton, Ohio, which makes graphene electrodes for supercapacitors, says the new electrodes may lack energy density. Besides, he says, "a combination of graphene, MnO2, and a conductive polymer or carbon nanotubes might be overkill."

Others have obtained much higher capacitance numbers with graphene–metal oxide or conductive polymer electrodes. However, Cui says what's most exciting about the new work is that such a simple dipping technique can enhance capacitance. He says the technique might be used to improve the conductivity of other electrode materials such as sulfur, silicon, and lithium manganese phosphate, thereby enhancing the performance of lithium-ion batteries. Cui and his colleagues are now working on improving battery electrodes using the new method.

By Prachi Patel
From Technology Review

A Simple Way to Boost Battery Storage

Lithium-ion battery electrodes bound together by a new highly conductive material have a much greater storage capacity—a development that could eventually increase the range of electric cars and the life of smart-phone batteries without increasing their cost. Unlike many high-capacity electrodes developed over the last few years, these can be made using the equipment already found in today's battery factories.


Battery binder: This microscopy image shows a silicon electrode before charging (left) and after 32 cycles. A new binder keeps the particles close together

The key is a stretchy, highly conductive polymer binder that can be used to hold together silicon, tin, and other materials that can store a lot of energy but that are ordinarily unstable. Researchers at the Lawrence Berkeley National Laboratory painstakingly engineered this new polymer binder and used it to make a silicon anode for a rechargeable lithium-ion battery with a storage capacity 30 percent greater than those on the market today. It's also more stable over time than previously developed electrodes.

When a lithium-ion battery is charged, lithium ions are taken up by one of the electrodes, called the anode. The more lithium the anode can hold, the more energy the battery can store. Silicon is one of the most promising anode materials: it can store 10 times more lithium than graphite, which is used to make the anodes in the lithium-ion batteries on the market today. "Graphite soaks up lithium like a sponge, holding its shape, but silicon is more like a balloon," says Gao Liu, a researcher at the Berkeley Lab's Environmental Energy Technologies Division. 

However, because the silicon anodes swell and shrink, changing in volume by three or four times as they're charged and discharged, the capacity of the battery fades over time. "After a few rounds of charge and discharge, pretty soon the silicon particles are not in touch with each other," which means the anode can't conduct electricity, says Liu.

One approach to the problem is to structure these anodes in a totally different way, for example growing shaggy arrays of silicon nanowires that can bend, swell, and move around as lithium enters and exits. This approach is being commercialized by Amprius, a startup in Palo Alto, California. But growing nanowires requires new processes that aren't normally used in battery manufacturing.

Today's anodes are made by painting a solvent-based slurry of graphite particles held together with a binder, a simple process that keeps costs low. The Berkeley researchers believe the key to making new battery materials like silicon work is to stick with this manufacturing process. That meant coming up with a rubbery binder that would stick to silicon particles, remain highly conductive in the harsh environment of the anode, and stretch and contract as the anode swells and deflates.

Most work on advanced batteries has focused on the active materials, but "we have pushed these materials to the limit," says Yury Gogotsi, professor of materials science and engineering at Drexel University. "Now what's limiting us are the binders." 

Reading through papers on silicon battery binders, Liu noticed that researchers were making "fatal mistakes"—choosing polymers that lose their conductivity in the kinds of conditions found in an anode, for example. He worked with theoretical chemists to come up with a list of polymers with the right electrical properties for the job. Once they found one, they altered it to make it much stickier. Once they developed and characterized this new material, they were able to make silicon anodes using conventional processes, and test them in batteries.

The Berkeley group's anodes have been tested in over 650 charging cycles. They maintain a storage capacity of 1,400 milliamp hours per gram—much greater than the 300 or so stored by conventional anodes. Full batteries incorporating the anodes store about 30 percent more total energy than a commercial lithium-ion battery. Typically, battery capacity increases by about 5 percent a year, Liu notes. He says they've tested the binder in other battery anodes, including those made of tin, that have similar potential and problems, and that it should work for any such materials.

The storage capacity of these batteries is nearly as good as those made from pure silicon nanowires with no binders, says Yi Cui, professor of materials science and engineering at Stanford and one of the founders of Amprius. That's impressive, he says, considering that the binder doesn't store any lithium.

Liu's group is now collaborating with researchers at 3M on the anode research. 3M is scaling up production of silicon-based battery materials designed to not expand quite so much during charging, says Kevin Eberman, who is developing battery materials products at 3M Electronics in St. Paul, Minnesota. But to make them work, a good binder is key. The company is providing the Berkeley group with materials to test. Liu says the Berkeley group has patented the binders, and is in talks with a few companies about ways to commercialize them.

By Katherine Bourzac
From Technology Review

Artificial light-harvesting method achieves 100% energy transfer efficiency

The researchers, led by Shinsuke Takagi from the Tokyo Metropolitan University and PRESTO of the Japan Science and Technology Agency, have published their study on their work toward an artificial LHS in a recent issue of the Journal of the American Chemical Society.

 By arranging porphyrin dye molecules on a clay surface using the “Size-Matching Effect,” researchers have demonstrated an energy transfer efficiency of approximately 100%, which is an important requirement for designing efficient artificial light-harvesting systems.

“In order to realize an artificial light-harvesting system, almost 100% efficiency is necessary,” Takagi told PhysOrg.com. “Since light-harvesting systems consist of many steps of energy transfer, the total energy transfer efficiency becomes low if the energy transfer efficiency of each step is 90%. For example, if there are five energy transfer steps, the total energy transfer is 0.9 x 0.9 x 0.9 x 0.9 x 0.9 = 0.59. In this way, an efficient energy transfer reaction plays an important role in realizing efficient sunlight collection for an artificial light-harvesting system.”

As the researchers explain in their study, a natural LHS (like those in purple bacteria or plant leaves) is composed of regularly arranged molecules that efficiently collect sunlight and carry the excitation energy to the system’s reaction center. An artificial LHS (or “artificial leaf”) attempts to do the same thing by using functional dye molecules. 

Building on the results of previous research, the scientists chose to use two types of porphyrin dye molecules for this purpose, which they arranged on a clay surface. The molecules’ tendency to aggregate or segregate on the clay surface made it challenging for the researchers to arrange the molecules in a regular pattern like their natural counterparts. 

“A molecular arrangement with an appropriate intermolecular distance is important to achieve nearly 100% energy transfer efficiency,” Takagi said. “If the intermolecular distance is too near, other reactions such as electron transfer and/or photochemical reactions would occur. If the intermolecular distance is too far, deactivation of excited dye surpasses the energy transfer reaction.” 

In order to achieve the appropriate intermolecular distance, the scientists developed a novel preparation technique based on matching the distances between the charged sites in the porphyrin molecules and the distances between negatively charged (anionic) sites on the clay surface. This effect, which the researchers call the “Size-Matching Rule,” helped to suppress the major factors that contributed to the porphyrin molecules’ tendency to aggregate or segregate, and fixed the molecules in an appropriate uniform intermolecular distance. As Takagi explained, this strategy is significantly different than other attempts at achieving molecular patterns.

“The methodology is unique,” he said. “In the case of usual self-assembly systems, the arrangement is realized by guest-guest interactions. In our system, host-guest interactions play a crucial role for realizing the special arrangement of dyes. Thus, by changing the host material, it is possible to control the molecular arrangement of dyes on the clay surface.”

As the researchers demonstrated, the regular arrangement of the molecules leads to an excited energy transfer efficiency of up to 100%. The results indicate that porphyrin dye molecules and clay host materials look like promising candidates for an artificial LHS.

“At the present, our system includes only two dyes,” Takagi said. “As the next step, the combination of several dyes to adsorb all sunlight is necessary. One of the characteristic points of our system is that it is easy to use several dyes at once. Thus, our system is a promising candidate for a real light-harvesting system that can use all sunlight. We believe that even photochemical reaction parts can be combined on the same clay surface. If this system is realized and is combined with a photochemical reaction center, this system can be called an ‘inorganic leaf.’”

By Lisa Zyga
From physorg

Inexpensive catalyst that makes hydrogen gas 10 times faster than natural enzyme

This step is just one part of a series of reactions to split water and make hydrogen gas, but the researchers say the result shows they can learn from nature how to control those reactions to make durable synthetic catalysts for energy storage, such as in fuel cells.

 The part of the catalyst that cranks out 100,000 molecules of hydrogen gas a second packs electrons into chemical bonds between hydrogen atoms, possibly hijacked from water.

In addition, the natural protein, an enzyme, uses inexpensive, abundant metals in its design, which the team copied. Currently, these materials -- called catalysts, because they spur reactions along -- rely on expensive metals such as platinum.

"This nickel-based catalyst is really very fast," said coauthor Morris Bullock of the Department of Energy's Pacific Northwest National Laboratory. "It's about a hundred times faster than the previous catalyst record holder. And from nature, we knew it could be done with abundant and inexpensive nickel or iron."

Stuffing Bonds
Electrical energy is nothing more than electrons. These same electrons are what tie atoms together when they are chemically bound to each other in molecules such as hydrogen gas. Stuffing electrons into chemical bonds is one way to store electrical energy, which is particularly important for renewable, sustainable energy sources like solar or wind power. Converting the chemical bonds back into flowing electricity when the sun isn't shining or the wind isn't blowing allows the use of the stored energy, such as in a fuel cell that runs on hydrogen.

Electrons are often stored in batteries, but Bullock and his colleagues want to take advantage of the closer packing available in chemicals.

"We want to store energy as densely as possible. Chemical bonds can store a huge amount of energy in a small amount of physical space," said Bullock, director of the Center for Molecular Electrocatalysis at PNNL, one of DOE's Energy Frontier Research Centers. The team also included visiting researcher Monte Helm from Fort Lewis College in Durango, Colo. 

Biology stores energy densely all the time. Plants use photosynthesis to store the sun's energy in chemical bonds, which people use when they eat food. And a common microbe stores energy in the bonds of hydrogen gas with the help of a protein called a hydrogenase.

Because the hydrogenases found in nature don't last as long as ones that are built out of tougher chemicals (think paper versus plastic), the researchers wanted to pull out the active portion of the biological hydrogenase and redesign it with a stable chemical backbone.

Two Plus Two Equals One
In this study, the researchers looked at only one small part of splitting water into hydrogen gas, like fast-forwarding to the end of a movie. Of the many steps, there's a part at the end when the catalyst has a hold of two hydrogen atoms that it has stolen from water and snaps the two together.

The catalyst does this by completely dismantling some hydrogen atoms from a source such as water and moving the pieces around. Due to the simplicity of hydrogen atoms, those pieces are positively charged protons and negatively charged electrons. The catalyst arranges those pieces into just the right position so they can be put together correctly. "Two protons plus two electrons equals one molecule of hydrogen gas," says Bullock.

In real life, the protons would come from water, but since the team only examined a portion of the reaction, the researchers used water stand-ins such as acids to test their catalyst.

"We looked at the hydrogenase and asked what is the important part of this?" said Bullock. "The hydrogenase moves the protons around in what we call a proton relay. Where the protons go, the electrons will follow."

A Bauble for Energy
Based on the hydrogenase's proton relay, the experimental catalyst contained regions that dangled off the main structure and attracted protons, called "pendant amines." A pendant amine moves a proton into position on the edge of the catalyst, while a nickel atom in the middle of the catalyst offers a hydrogen atom with an extra electron (that's a proton and two electrons for those counting).

The pendant amine's proton is positive, while the nickel atom is holding on to a negatively charged hydrogen. Positioned close to each other, the opposites attract and the conglomerate solidifies into a molecule, forming hydrogen gas.

With that plan in mind, the team built potential catalysts and tested them. On their first try, they put a bunch of pendant amines around the nickel center, thinking more would be better. Testing their catalyst, they found it didn't work very fast. An analysis of how the catalyst was moving protons and electrons around suggested too many pendant amines got in the way of the perfect reaction. An overabundance of protons made for a sticky catalyst, which pinched it and slowed the hydrogen-gas-forming reaction down.

Like good gardeners, the team trimmed a few pendant amines off their catalyst, leaving only enough to make the protons stand out, ready to accept a negatively charged hydrogen atom.

Fastest Cat in the West
Testing the trimmed catalyst, the team found it performed much better than anticipated. At first they used conditions in which no water was present (remember, they used water stand-ins), and the catalyst could create hydrogen gas at a rate of about 33,000 molecules per second. That's much faster than their natural inspiration, which clocks in at around 10,000 per second.

However, most real-life applications will have water around, so they added water to the reaction to see how it would perform. The catalyst ran three times as fast, creating more than 100,000 hydrogen molecules every second. The researchers think the water might help by moving protons to a more advantageous spot on the pendant amine, but they are still studying the details.

Their catalyst has a drawback, however. It's fast, but it's not efficient. The catalyst runs on electricity -- after all, it needs those electrons to stuff into the chemical bonds -- but it requires more electricity than practical, a characteristic called the overpotential.

Bullock says the team has some ideas on how to reduce the inefficiency. Also, future work will require assembling a catalyst that splits water in addition to making hydrogen gas. Even with a high overpotential, the researchers see high potential for this catalyst.
 

Energy Storage for Solar Power

BrightSource Energy has become the latest solar thermal power company to develop a system for generating power when the sun isn't shining. The company says the technology can lower the cost of solar power and make it more reliable, helping it compete with conventional sources of electricity.

 Stored sunlight: A rendering shows BrightSource’s new thermal storage design. The two large tanks will store molten salt, which can be used to generate steam to drive a turbine.

The company, based in Oakland, California, is building one of the world's largest solar thermal power plants. The 392-megawatt solar plant in Ivanpah, California, however, will not include the storage technology. Instead, BrightSource is working with utilities to determine which future projects could best benefit from storage. 

Solar thermal systems use mirrors to focus sunlight, generating temperatures high enough to produce steam to drive a turbine. One of the advantages of the solar thermal approach, versus conventional photovoltaics that convert sunlight directly into electricity, is that heat can be stored cheaply and used when needed to generate electricity. In all solar thermal plants, some heat is stored in the fluids circulating through the system. This evens out any short fluctuations in sunlight and lets the plant generate electricity for some time after the sun goes down. But adding storage systems would let the plant ride out longer periods of cloud cover and generate power well into, or even throughout, the night. Such long-term storage could be needed if solar is to provide a large share of the total power supply. 

BrightSource is using a variation on an approach to storage that's a decade old: heating up a molten salt—typically, a combination of sodium and potassium nitride—and then storing it in a tank. To generate electricity, the molten salt is pumped through a heat exchanger to generate steam. BrightSource CEO John Woolard says one big factor in making this technology economically attractive is the use of power towers—in which mirrors focus sunlight on a central tower—that generate higher temperatures than other solar thermal designs. That higher temperature makes it possible to store more energy using a smaller amount of molten salt. "It's a much more efficient system and much more cost effective, overall," he says. 

Storage allows a thermal power plant to run more hours in the day, so they can more quickly recover the cost of expensive steam turbines and generators. Woolard says that while a solar thermal plant without storage can generate electricity about 2,700 hours a year, BrightSource's storage system increases that to 4,300 hours. The increased output more than offsets the added cost of storage. A study from the National Renewable Energy Laboratory (NREL) in Golden, Colorado, estimates that storage in a power tower system could cut costs per kilowatt hour by 25 to 30 percent. 

At least two other companies are pairing power tower technologies with molten salt storage. Torresol Energy has built such a system at a 19.9-megawatt solar thermal power plant near Seville, Spain, and demonstrated that it can run the power plant through the night using stored heat. In the United States, Solar Reserve plans to build a power tower with molten salt storage in Riverside County, California.  

In addition to lowering costs, the storage system also improves the economics of solar thermal power by increasing the price that utilities are willing to pay for the electricity. Storage decreases the need for utilities to invest in backup power for smoothing out variations in power. Utilities will also pay a higher price for power they can count on at any given moment to make up for increases in demand. And the storage system lets the plant sell power into the evening, when power prices are higher in some locations. 

Storage technology may be essential if solar thermal technology is to compete with photovoltaic solar panels, which have been coming down in price, says Mark Mehos, an NREL researcher. "BrightSource plants that don't have energy storage probably generate electricity at about the same price as a plant that uses photovoltaics," he says. "So all things being equal, they would like to be able to deliver that at a higher value."

By Kevin Bullis
From Technology Review

Nanoscale Pillars Could Have a Big Role in Future Batteries

Tin, silicon, and a few other elements have long been languishing on chemists' list of electrode materials that could, in theory, help lithium-ion batteries hold more energy. A new way of structuring these materials could at last allow them to be used in this way. 

Researchers at the Lawrence Berkeley National Laboratory made tin electrodes by using layers of graphene to protect the normally fragile tin. These first tin electrodes are a sign that materials scientists have made a great deal of progress in using nanoscale structures to improve batteries.

 Power pillars: This battery electrode, shown in cross section under an electron microscope, consists of nanoscale tin pillars sandwiched between sheets of graphene.

Making battery electrodes from tin or silicon can boost the battery's overall energy storage. That's because such materials can take in more lithium during charging and recharging than carbon, which is normally used. But silicon and tin tend to be unstable as electrodes. Tin takes up so much lithium that it expands in volume by a factor of two to three during charging. "This forms cracks, and the tin leaks into the electrolyte and is lost," says Yuegang Zhang, a scientist at Lawrence Berkeley.

Zhang's clever solution is to layer the tin between sheets of graphene, single-atom-thick sheets of carbon mesh. Graphene is highly conductive, and while it's flexible, it's also the strongest material ever tested. 

The tin-graphene electrode consists of two layers of tin nanopillars sandwiched between three sheets of graphene. The pillars help the electrode remain stable: instead of fracturing, the tin expands and contracts during charging without breaking. The space between the pillars means there's plenty of room for the battery's electrolyte to move around, which ensures fast charging speeds. 

Zhang's group has made prototype batteries featuring these electrodes. The prototype tin-graphene batteries can charge up in about 10 minutes and store about 700 milliamp-hours per gram of charge. This storage capacity is maintained over 30 charge cycles. The batteries will ultimately need to hold their performance for hundreds of charge cycles. "The performance they have is quite reasonable, and this has a pretty clear application in existing batteries," says Yi Cui, associate professor of materials science at Stanford University. Cui was not involved with the work.

Several other research groups are working on promising battery materials that include nanoscale structures. Cui has founded a company called Amprius to commercialize another kind of battery anode that features silicon nanowires. The nanostructure of these wires also helps the fragile material remain stable as it takes up and releases lithium. Another group, led by Pulickel Ajayan at Rice, recently built a nanostructured battery that incorporates a tin electrode, in this case integrating the electrodes and the electrolyte on individual nanowires. Arrayed together, these nanowires could make long-lasting microbatteries for small devices such as sensors. 

Zhang is working to demonstrate the use of the nanopillar structure with other fragile electrode materials, including silicon. The process may add to the cost of battery production, but the performance gains could offset the potential additional cost. "People typically assume that a fancy nanoscale structure will cost more, but it may not," says Zhang.

By Katherine Bourzac
From Technology Review

New Process Could Make Canadian Oil Cheaper, Cleaner

New technology for extracting oil from oil sands could more than double the amount of oil that can be extracted from these abundant deposits. It could also reduce greenhouse-gas emissions from the process by up to 85 percent. The technology was developed by N-Solv, an Alberta-based consortium that recently received $10 million from the Canadian government to develop the technology.

 Oil cleanup: A solvent allows oil to flow out of the sand at the top of this simulated oil sands reservoir.

Canada's oil sands are a huge resource. They contain enough oil to supply the U.S. for decades. But they are made up of a tarry substance called bitumen, which requires large amounts of energy to extract from the ground and prepare for transport to a refinery. This fact has raised concerns about the impact of oil sands on climate change. The concerns have been heightened by plans to build a new pipeline for transporting crude oil from the sands to refineries in the United States. 

Most oil sands production currently involves digging up oily sand deposits near the surface and processing the sludgy material with heat and chemicals to free the oil and reduce its viscosity so it can flow through a pipeline. But 80 percent of oil sands are too deep for this approach. Getting at the deeper oil requires treating the bitumen underground so it can be pumped out through an oil well. The most common technique in new projects involves injecting the bitumen with steam underground. But producing the steam means burning natural gas, which emits carbon dioxide. And the oil that's pumped out is still too thick to flow through a pipeline, so it has to be partially refined, which emits still more greenhouse gases. 

N-Solv's process requires less energy because it uses a solvent rather than steam to free the oil, says Murray Smith, a member of N-Solv's board of directors. The solvent, such as propane, is heated to a relatively low temperature (about 50 °C) and injected into a bitumen deposit. The solvent breaks down the bitumen, allowing it to be pumped out along with the propane, which can be reused. The solvent approach requires less energy than heating, pumping, and recycling water for steam. And because the heaviest components of the bitumen remain underground, the oil that results from the solvent process needs to be refined less before it can be transported in a pipeline. 

Because the new process requires less energy, it should also be cheaper. Smith adds that the equipment needed for heating and reusing the propane is less expensive than technology for managing the large volumes of water used in the steam process. With conventional techniques, oil prices have to be above $50 to $60 per barrel—as they have been for several years—for oil sands to be economical. Smith says that with the solvent process, oil sands are still economical even if oil is $30 to $40 per barrel, close to what it was in the 1990s and early 2000s (in inflation-adjusted dollars). N-Solv says the lower costs will make it possible to economically extract more than twice as much oil from the oil sands compared to conventional technologies. 

The idea of using solvents to get at oil sands was proposed in the 1970s, but early experiments showed that the process couldn't produce oil quickly enough. Two things changed that, according to N-Solv. First, horizontal drilling technologies now make it possible to run a solvent injection well along the length of an oil sands deposit, increasing the area in contact with the solvent, thus increasing production. Second, N-Solv determined that even small amounts of methane—a by-product of using a solvent—could contaminate the propane and degrade its performance. So N-Solv introduced purification equipment to separate methane from the propane before it is reused. The separated methane can also be used to heat the propane, further reducing energy costs. 

Although N-Solv's technology could reduce carbon-dioxide emissions from production, most of the emissions associated with oil sands—as with any source of oil—come not from producing the oil, but from burning it in vehicles and furnaces. The technology's impact on climate change will depend on whether the process leads to increased oil production—if it does, it may actually result in increased net greenhouse-gas emissions, says David Keith, a chemical and petroleum engineering professor at the University of Calgary. 

So far, the process has been tested only in a lab. Now N-Solv will begin a pilot project that could produce 500 barrels of oil a day. The $60 million project, which is mostly funded by private sources, will determine whether the process can work on a larger scale. 

By Kevin Bullis
From Technology Review

Pure Nanotubes by the Kilo

An improved process for making large amounts of pure metallic carbon nanotubes could hold the key to overhauling the electrical power grid with more efficient transmission lines.

Researchers at Rice University plan to generate a large quantity of this material by the end of summer. They'll use these nanotubes to make long and highly conductive fibers that could be woven into more efficient electrical transmission lines.

Amped up: This jumble of carbon nanotubes has tripled in volume after two runs through a growth process called amplification.

There are a few different classes of carbon nanotube, each with slightly different properties and different potential uses. Unfortunately, existing production methods result in a mixture of different nanotubes, with varying dimensions and wildly different electrical properties. Purely semiconducting nanotubes, useful for future integrated circuits, are in the mix with metallic nanotubes that could be used to make highly conductive wires. So nanotubes have to be separated by type, a slow and expensive process, says Andrew Barron, professor of chemistry and materials science at Rice.

"There is a subset of nanotubes that are the best conducting materials to be found, that don't lose any energy to heat," says Barron.

Barron is part of a group at Rice that wants to make something very large from these nanotubes: miles and miles of highly conductive electrical transmission lines for a more efficient energy grid, which will be important as the use of renewable energy grows. This was one vision of the late Rice professor Richard Smalley, who won the Nobel Prize in Chemistry for his codiscovery of fullerenes, a new type of carbon structure. The Rice researchers have made long, pure carbon nanotube fibers, but since they have been working from impure samples, these fibers are not as conductive as they could be.

Barron and his colleagues have now improved on a method for making pure nanotubes that they first developed in 2006. Called "amplification," it should eventually allow them to turn a nanogram of pure carbon nanotubes into a gram, then a kilogram, then a ton. They start by separating a small amount of pure metallic nanotubes from a mixture, and then attach a catalyst to the tip of each tube. They then put the nanotubes into a pressurized, temperature-controlled chamber and feed in a mixture of gases. Under these conditions, the nanotubes double in size, growing from the catalyst at the tip. The existing nanotube acts as a template that dictates the diameter, structure, and properties of the extra length of the nanotube. The nanotubes are then cut and the process is repeated.
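The scale of that ambition is easy to underestimate. A minimal back-of-the-envelope sketch in Python, assuming (as the description above implies) that the total mass of nanotubes doubles on each amplification run, shows how many cycles separate a nanogram from a metric ton; the cycle counts are simple arithmetic, not figures reported by the Rice group:

```python
import math

start_g = 1e-9  # one nanogram of pure seed nanotubes
targets = {"gram": 1.0, "kilogram": 1e3, "metric ton": 1e6}

for name, target_g in targets.items():
    # Each cycle doubles the mass, so the cycle count is log2 of the growth factor.
    cycles = math.ceil(math.log2(target_g / start_g))
    print(f"{name}: {cycles} doubling cycles")
```

Under this idealized assumption, a gram takes about 30 doublings and a ton about 50, which is why the paper's claim that every single nanotube can be amplified matters: any per-cycle loss compounds over dozens of runs.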

Barron and colleagues first demonstrated amplification a few years ago, but it wasn't very efficient. In a paper published online in June in the journal Nano Letters, they described a combination of catalysts and growth conditions that ensures every single nanotube is amplified. Previously they'd assumed these conditions should be identical to the ones used to make the starting batch of nanotubes, but that approach didn't work well. Barron says they have now found the conditions that make amplification work.

The Rice researchers are using the amplification process to accumulate enough pure metallic nanotubes to make a fiber of the type that would be used to make an electrical transmission line. They've made long, conductive nanotube fibers in the past using a spinning process also developed at Rice, but they've had to use impure nanotubes to make any great length of the material.

Aaron Franklin, a researcher at IBM's Watson Research Center, says the new study probably doesn't "reveal the golden ticket for achieving high volumes of metallic-only tubes." The amplification process is still not producing very large quantities of the material, Franklin notes. 

While the Rice group continues to work on amplification, other researchers are exploring alternative ways of making pure nanotubes in quantity. Mark Hersam, a professor of chemistry at Northwestern University, developed what is now one of the most commonly used separation methods. He founded a company called NanoIntegris to sell pure nanotubes. He says ramping up production "is now essentially an industrial optimization exercise."

By Katherine Bourzac
From Technology Review