Meet the Nimble-Fingered Interface of the Future

Microsoft's Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010. Eight million devices were sold in the product's.....

Electronic Implant Dissolves in the Body

Researchers at the University of Illinois at Urbana-Champaign, Tufts University, and ......

Sorting Chip May Lead to Cell Phone-Sized Medical Labs

The device uses two beams of acoustic -- or sound -- waves to act as acoustic tweezers and sort a continuous flow of cells on a ....

TDK sees hard drive breakthrough in areal density

Perpendicular magnetic recording was an idea that languished for many years, says a TDK technology backgrounder, because the ....

Engineers invent new device that could increase Internet

The device uses the force generated by light to flip a mechanical switch of light on and off at a very high speed........


Cyrene Quiamco's Aeolus mobile phone breaks wind

Breaks wind barrier in phone technology. Damn you, short headlines designed for ADD Internets.
Once upon a time there was a forum for product design. And on this forum, a graphic designer by the name of Cyrene Quiamco (say it soft and it's almost like praying) gave the tree-loving eco-hounds of this world a sprig of hope.



Cyrene's Aeolus mobile phone harnesses the power of the wind so that you don't have to jack yourself buying a ridiculously expensive car adapter that ends up breaking the third time you try and wiggle it into place. Or, desperately clinging to those Samsung sponsored charging stations at airports which are built like French street urinals for gadgets.


Anyhow, back to Cyrene. The concept is delish, as you might say if you had any sense of style or design, but it is just a concept. No doubt, she will go on to greater things, perhaps eventually burning out somewhere in Cupertino.

The phone would have a back-up plan for when the wind doesn't cooperate: solar power. And it would be made of renewable materials. Check out Cyrene's resume, Jobs. All you need now is an iPad shaped like a sail and wheelies with each user install. Man, you'd rock it!

By Emory Kale
From tgdaily.com 

Team grows retina from stem cells

UC Irvine researchers have created a retina from human embryonic stem cells, the first time they've been used to create a three-dimensional tissue structure. The eight-layer, early stage retina could be the first step towards the development of transplant-ready retinas to treat eye disorders such as retinitis pigmentosa and macular degeneration.


"We made a complex structure consisting of many cell types," said study leader Hans Keirstead. "This is a major advance in our quest to treat retinal disease."

In previous studies on spinal cord injury, the Keirstead group came up with a method to direct human embryonic stem cells to become specific cell types, a process called differentiation. 

The team used the same technique to create the multiple cell types necessary for the retina. The greatest challenge, Keirstead said, was in the engineering. To mimic early stage retinal development, the researchers needed to build microscopic gradients for solutions in which to bathe the stem cells to initiate specific differentiation paths.

"Creating this complex tissue is a first for the stem cell field," Keirstead said. "Dr Gabriel Nistor in our group addressed a really interesting scientific problem with an engineering solution, showing that gradients of solutions can create complex stem cell-based tissues."

Retinal diseases are particularly damaging to sight. More than 10 million Americans suffer from macular degeneration, the leading cause of blindness in people over 55. About 100,000 have retinitis pigmentosa, a progressive, genetic disorder that usually appears in childhood.

“What’s so exciting with our discovery,” Keirstead said, “is that creating transplantable retinas from stem cells could help millions of people, and we are well on the way.”

The UCI researchers are testing the early-stage retinas in animal models to learn how much they improve vision. Positive results will lead to human clinical trials.

By Emma Woollacott
From tgdaily.com 

Supercomputer project aims to simulate the whole world

In a sort of giant Sim game, scientists are planning to use some of the world's largest supercomputers to simulate all life on Earth, including the financial system, economies and whole societies. The project is called Living Earth Simulator, and forms part of a huge EU research initiative named FuturICT. It will unite various existing projects and expand upon them.


For example, ETH Zurich is simulating the travel activities of all 7.5 million inhabitants of Switzerland to forecast and cope with traffic congestion. 

Other researchers are mining huge amounts of financial data to detect dangerous bubbles in stock and housing markets, potential bankruptcy cascades in networks of companies, or similar vulnerabilities in other complex networks such as the internet.

Other existing simulations reveal common patterns behind the breakdown of social order in events as diverse as the war in former Yugoslavia, lootings after earthquakes or other natural disasters, or the recent violent demonstrations in Greece.

But there are even greater ambitions for the FuturICT project, which also aims to bring in data from field studies and laboratory experiments, along with data from the internet and even games such as Second Life.
An ethics committee promises to make sure individual identities are kept secret.

The scientists behind the project foresee the development of crisis observatories and decision-support systems for politicians and business leaders. 

"Such observatories would detect advance warning signs of many different kinds of emerging problems, including large-scale congestion, financial instabilities, the spreading of diseases, environmental change, resource shortages and social conflicts," " says the project's Dirk Helbing. 

By Emma Woollacott 
From tgdaily.com

New Role of Molecule in the Health of Body's Back-Up Blood Circulation

This "back-up system" -- called the collateral circulation -- involves a small number of tiny specialized blood vessels, called collaterals, that can enlarge their diameters enough to carry significant flow and thus bypass a blockage.

This is a photo of inner thigh circulation of a newborn mouse, showing several collaterals interconnecting the main artery supplying the leg with arteries located in the thigh.

Researchers at the University of North Carolina at Chapel Hill School of Medicine have now discovered that the abundance of these vessels in a healthy individual and their growth or remodeling into "natural bypass vessels" depends on how much of a key signaling molecule -- called nitric oxide -- is present.

The study, conducted in animal models, suggests that nitric oxide not only is critical in maintaining the number of collateral vessels while individuals are healthy. It also is key in the amount of collateral vessel remodeling that occurs when obstructive disease strikes.

The research findings recently appeared online in the journal Circulation Research and will be published in the print edition on June 25th. They could one day enable researchers to predict people's risk for catastrophic stroke, myocardial infarction, or peripheral artery disease. Such knowledge could inform individuals with poor collateral capacity to adopt a lifestyle that can help reduce their chances of getting diseases that could further lower their number of collateral vessels.

"If you've got a good number of these natural bypass vessels, you have something of an 'insurance policy' that favors you suffering less severe consequences if you get atherosclerosis or thrombotic disease" said senior study author James E. Faber, Ph.D., professor of cell and molecular physiology at UNC.

"And if you were born with very few, the last thing you would want to do is subject yourself to environmental factors that might further cut down the number of these vessels." Faber also is a member of the McAllister Heart Institute at UNC. Earlier this year, his team reported that these vessels form early in life and that genetic background has a major impact on how many you end up with.

The factors that put people at risk for developing stroke, heart attack, or peripheral artery disease include the usual suspects -- smoking, diabetes, hypertension, high cholesterol, family history, age. But until recently, researchers didn't know what linked those risk factors together, when it comes to insufficiency of the collateral circulation.

Faber says studies have shown that all of these factors cause the endothelial cells that line our blood vessels to produce less nitric oxide, a "wonder molecule" that protects our vasculature from disease. Now, he says, his group's findings indicate that this molecule is also a critical factor maintaining the health of the collateral circulation.

So Faber and lead study author Xuming Dai, M.D., Ph.D., of UNC's departments of medicine and physiology, wondered whether collateral vessels would be lost if the levels of nitric oxide were suppressed. They counted the number of these vessels in the brains of mice genetically engineered to lack the enzyme -- called eNOS -- that makes most of the nitric oxide in blood vessel walls.

The researchers found that from the ages of three months to six months (equivalent to about twenty-one to forty-five years of age in humans) there was a 25 percent reduction in the number of collateral vessels in the mutant mice as compared to normal ones. They also saw the same percentage decrease in collateral vessels supplying the legs, where they were trying to model peripheral artery disease.

Next, the investigators wanted to know if a lack of nitric oxide would affect the way that existing collaterals respond to an obstruction in a main artery.

By blocking an artery in the legs of these genetically engineered mice, Faber and Dai were able to reroute circulation through the collateral vessels. Over a period of 2-3 weeks, the flow of detoured blood usually causes the little collaterals to enlarge their diameters by 3 to 4 fold through a process called collateral remodeling. But the researchers found that such remodeling was impaired in the mutant mice that produced less nitric oxide when compared to their normal counterparts.

In the first such experiment of its kind, Dai then succeeded in surgically removing these tiny collaterals from the mice and scanned their entire genomes for differences between the mutant and normal rodents that might explain this variation in remodeling.

"The only category of genes that was dramatically different between the two was the cell cycle control genes, genes that are involved in the proliferation of cells in the vascular wall -- a process that's required for collaterals to remodel," said Dai, a clinical cardiology fellow receiving basic science training in Faber's laboratory. "This is an important function of eNOS that had not been discovered before."

Faber says that possessing a variant form of the eNOS gene that results in loss of collaterals may be one more item on the list of risk factors for cardiovascular disease. There is already evidence that healthy people may vary up to ten-fold in the abundance of their collateral circulation, so the trick may be figuring out a way to upgrade that back-up plan for those who are lacking.

"If we can figure out how these unique vessels are made and maintained in healthy tissues, we hope we can then uncover how to induce them to be made with treatments in patients who don't have enough," Faber said.
The UNC research was funded by the National Heart, Lung and Blood Institute of the National Institutes of Health.

From sciencedaily.com

Western U.S. Grid Can Handle More Renewables

More than a third of the electricity in the western United States could come from wind and solar power without installing significant amounts of backup power. And most of this expansion of renewable energy could be done without installing new interstate transmission lines, according to a new study from the National Renewable Energy Laboratory (NREL) in Golden, CO. But the study says increasing the amount of renewables on the grid will require smart planning and cooperation between utilities.

Hauling wind: High-voltage transmission lines, like those shown here from the Bonneville Power Administration, are being called on to convey more renewable energy. As much as half of the power carried by Bonneville lines can come from wind turbines.

The NREL findings provide a strong counterargument to the idea that the existing power grid is insufficient to handle increasing amounts of renewable power. As California and other states require utilities to use renewable sources for significant fractions of their electricity, some experts have warned that measures to account for the variability of wind and solar power could be costly. At the extreme, they speculated, every megawatt of wind installed could require a megawatt of readily available conventional power in case the wind stopped blowing. But the NREL findings, like other recent studies, suggest that the costs could be minimal, especially in the West. 

"It's a lot lower than what people thought it was going to be," says Daniel Brooks, project manager for power delivery and utilization at the Electric Power Research Institute. Even if wind farms had to pay for the necessary grid upgrades and backup power themselves, they could still sell electricity at competitive rates, he says. 

NREL considered a scenario in which 30 percent of the total electricity produced in a year in western states comes from wind turbines and 5 percent comes from solar power--mostly from solar thermal plants that generate power by concentrating sunlight to produce high temperatures and steam. The researchers assumed the solar thermal plants would have some form of heat storage, although not all planned plants do. The study used detailed data about wind speeds, solar irradiance, and the operation of the electrical grid. GE Energy researchers commissioned by NREL then used the data to simulate the impact of various scenarios for wind and solar power use.

The researchers found that one way to keep the number of new backup power plants to a minimum is to expand the geographical area that renewable energy is gathered from, says Debra Lew, the NREL project manager in charge of the study. If utilities can call on wind farms and solar power from several states, rather than just from the local area, a drop in wind in one area is likely to be offset by an increase in wind elsewhere, and solar panels shaded by clouds in one area will be offset by others in sunny areas.

That makes it far less necessary to have conventional power plants standing by to make up for drops in power. The NREL study estimated that drawing only on local resources would increase variability on the grid by a factor of 50. That's "a huge increase," Lew says, too big for a local utility to balance using backup power and other resources. If you aggregate resources over several states, the increase is less than a factor of two.
Increasing cooperation among utilities can also give each better access to reserve generating capacity that can absorb this variability, Brooks says. A utility in Arizona could draw on coal plants in Wyoming to make up for a drop in solar power. 
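
The smoothing effect of geographic aggregation is easy to illustrate with a toy calculation. The sketch below (Python, with invented numbers rather than NREL's actual data) compares the hour-to-hour swings of a single simulated wind site with the swings of the combined output of many independently simulated sites; the relative fluctuation of the aggregate is much smaller, which is the statistical effect the study relies on. Real wind farms are only partly independent, so the benefit in practice is smaller than in this idealized case, but the direction is the same.

    import random
    import statistics

    random.seed(1)

    def site_output(hours=1000):
        """Hourly output of one hypothetical wind site, fluctuating around 50 MW."""
        out, level = [], 50.0
        for _ in range(hours):
            level = max(0.0, min(100.0, level + random.gauss(0, 10)))  # random walk
            out.append(level)
        return out

    def relative_swing(series):
        """Standard deviation of hour-to-hour changes, as a fraction of mean output."""
        changes = [b - a for a, b in zip(series, series[1:])]
        return statistics.pstdev(changes) / statistics.mean(series)

    single = site_output()
    # Combine 25 independently simulated sites, a stand-in for spreading wind
    # farms across several states.
    aggregate = [sum(vals) for vals in zip(*(site_output() for _ in range(25)))]

    print(f"single site swing: {relative_swing(single):.3f}")
    print(f"25-site aggregate: {relative_swing(aggregate):.3f}")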

There will still be times during the year when poor weather forecasts and high demand lead to drops in power that are too big to compensate with supplemental power plants. But according to NREL's models, this will happen rarely--just 89 hours out of the year, or 1 percent of the time. For such a small amount of time, it's not economical to build backup generators to make up the difference. But NREL found that a strategy called demand response could keep the grid from collapsing under the pressure. It's already been used in Texas, for example, to make up for sudden drops in wind power. 

In demand response, utilities send out signals to customers--typically businesses--asking them to cut their power use, in return for favorable electricity rates. That cuts demand enough to balance the grid. "It's effective and a lot less expensive" than building backup power plants, Lew says.

Distributing renewable energy among several states would not require extensive new interstate transmission lines at first, Lew says, because in the West there is already enough transmission available--if operators are given ready access to spare capacity. No new interstate transmission will be needed to accommodate wind farms that supply up to 20 percent of the electricity in a year and solar power that provides up to 3 percent, although there will need to be additional power lines to convey power locally from new renewable power plants, which might be in a remote area. 

This is in contrast to the eastern United States, where long-distance transmission would likely need to be built to carry power from the Midwest, where it is windy but electricity demand is low, to cities on the coast, where demand is high, she says. In the West, reaching 30 percent wind and 5 percent solar will require more transmission, but this could easily be built in the time it would take to reach such high levels of renewable energy. 

In addition to sharing power over large areas, for example, utilities could significantly reduce variability by changing their scheduling practices. Utilities now use algorithms to predict demand a day ahead and then schedule hourly changes to power generation. These changes on the hour, when a utility might cut down the amount of power a coal plant generates in anticipation of lower demand, actually introduce some instability into the grid. Scheduling changes more frequently could smooth this out, and allow more accurate coordination with renewable energy resources. Scheduling more often isn't a technical challenge, although it could require software upgrades--it's mostly a matter of changing practice, Brooks says. 
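
A rough way to see why finer-grained scheduling helps: if generation is dispatched in flat hourly blocks against a smoothly varying net load, the mismatch within each hour can be large, while shorter blocks track the curve more closely. The sketch below illustrates only that point, using an invented load shape; it is not a model of any utility's actual scheduling algorithm.

    import math

    def net_load(minute):
        """Hypothetical net load (MW) over one morning, rising smoothly."""
        return 1000 + 300 * math.sin(minute / 300.0)

    def worst_mismatch(block_minutes):
        """Largest gap between a flat block schedule and the actual load."""
        worst = 0.0
        for start in range(0, 480, block_minutes):  # an 8-hour window
            mins = range(start, start + block_minutes)
            target = sum(net_load(m) for m in mins) / block_minutes  # block average
            worst = max(worst, max(abs(net_load(m) - target) for m in mins))
        return worst

    print(f"hourly blocks:    worst mismatch {worst_mismatch(60):.0f} MW")
    print(f"10-minute blocks: worst mismatch {worst_mismatch(10):.0f} MW")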

But utilities, notoriously slow to change, are showing signs of responsiveness in the face of renewable energy mandates, Brooks says. The need is urgent because some areas of the country already rely on relatively large amounts of wind power, at least part of the time. In some places in Texas, wind power, at times, can account for a quarter of the power on the grid, although it still only accounts for just over 6 percent of the total electricity in a year. In the Bonneville Power Administration in the Northwest, during some short periods of time, half of the power being generated comes from wind. "Utilities see more and more solar and wind coming on line and they're scared that they can't manage this on their own," Lew says. 

By Kevin Bullis 

New Color Screen Combines Beauty, Readability

If you're looking to buy an e-reader, you can choose between a beautiful but battery-draining liquid crystal display (LCD), like the one in Apple's new iPad, or a slow-switching but easy-to-read black-and-white one, like the one in Amazon's Kindle. At the Society for Information Display's annual conference this week in Seattle, Qualcomm MEMS Technologies is demonstrating prototypes of a screen that falls somewhere between these extremes. It shows video in color, and under full sunlight, but without draining the battery. The display will be in products by the end of the year.

Mirror vision: This prototype display uses mirrors to generate a color picture without a backlight.

The backlights in conventional LCDs consume the majority of the power in portable electronic devices. That's because a significant amount of that light is lost in polarizers and filters inside the device. These displays also require continuous power to maintain an image. "Batteries are evolving slowly, and there's increasing pressure to reduce this power consumption," says Brian Galley, senior director of product management at Qualcomm MEMS Technologies.

The iPad has raised the bar, says Paul Semenza, senior analyst at Display Search, by showing that LCDs are getting more energy-efficient while the glass they're built on is getting tougher and lighter. "Once things go full color, and go video, it's difficult to go back," he says.

Qualcomm's Mirasol display, which can play video in color, extends battery life by 51 percent relative to an LCD, according to a report by Pike Research.

Many companies are working to develop better reflective displays, which provide considerable power savings because they don't require a backlight and, in most cases, can maintain an image without needing additional power. E-Ink has been a leader in this area but has yet to come out with a color, video-capable display, though company representatives at the conference say one will be ready at the end of the year.

E-Ink pixels contain electrically charged black and white particles; when a small voltage is applied, one or the other moves to the surface to make the pixel reflect light or appear black. To make a color screen, filters are added to the top. Early versions of E-Ink's color screen appeared washed out because of light lost to the filter, and were relatively slow, taking about a quarter of a second to refresh a page. Black-and-white E-Ink prototypes at the conference have higher-resolution, faster-switching screens that looked crisp; company representatives say these improvements will lead to better color technology, too.

The pixels in Qualcomm's Mirasol displays can switch fast enough to show video, and don't use filters to generate color. These displays generate color by harnessing the interference effects that occur when light bounces off certain structures.

Each pixel is made up of rows of two-layered reflective surfaces separated by air, and acts as a tiny interference chamber. When ambient light hits each of these subpixels, some will reflect off the top surface, and some will pass through and be reflected off the bottom surface. The two waves constructively interfere with each other, combining to create light whose color is determined by the distance between the two surfaces--on the order of hundreds of nanometers. Within each pixel are three of these chambers, each a different depth, to create red, green, and blue. To turn the pixel off, a small voltage is applied to collapse the bottom reflector against the top. Once a subpixel is either on or off, it will stay there until power is applied again.
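
As a rough illustration of how the gap depth selects a color, consider the simplest thin-film picture: a cavity of depth d reinforces wavelengths where twice the gap equals a whole number of wavelengths (2d = m x wavelength). The sketch below uses that idealized condition, ignoring the phase shifts and material details of the real Mirasol stack; the gap depths are hypothetical, chosen only to show that cavities a few hundred nanometers deep land in the visible range.

    # Idealized constructive-interference condition for an air gap: 2*d = m*wavelength.
    # Real interferometric pixels involve extra phase shifts at the mirrors, so these
    # numbers are illustrative only.

    def reflected_wavelengths_nm(gap_nm, orders=(1, 2, 3)):
        """Wavelengths (nm) reinforced by a gap of the given depth, for a few orders m."""
        return [2 * gap_nm / m for m in orders]

    for gap in (235, 265, 325):  # hypothetical subpixel depths in nanometers
        visible = [round(w) for w in reflected_wavelengths_nm(gap) if 380 <= w <= 750]
        print(f"gap {gap} nm -> visible peaks at {visible} nm")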

These Mirasol pixels can switch between on and off in 10 millionths of a second. "Because it's mechanical, it's very fast," says Galley. Touch-screen Mirasol prototypes at the conference showed video almost as color-saturated and rich as an LCD, and the response rate felt faster than an E-Ink screen. 

The company showed other prototype displays incorporating a front light, which had a crisper picture. Front lights are more energy-efficient than the backlights, but are tricky to engineer. Galley says the first product will likely incorporate a touch screen, and the front light will be added to future generations of products. Monochrome Mirasol displays are already in a few products in Asia with very small screens, including a Bluetooth headset and a cell phone.

"The battle in reflective displays is, who can get reasonable color and motion," says Semenza. "That's where Qualcomm has leapfrogged other companies." The interference-based displays may prove to have some cost advantages as well. Most displays must be built on a thin-film transistor backplane, which is one of the most expensive components; the MEMS system in the Mirasol doesn't require a transistor array. However, the real cost of the displays won't be known until they're manufactured in large quantities.

Qualcomm is currently manufacturing the devices at sizes of about a meter squared at a plant in Taiwan, but the company would not comment on their current manufacturing capacity. Galley says Qualcomm has addressed problems with earlier prototypes, but would not go into detail. In addition to the e-reader prototypes, company representatives are showing small static displays that demonstrate further improvements in brightness. Qualcomm is partnering with another manufacturer to develop e-readers using Mirasol displays.

By Katherine Bourzac 
From Technology Review

Mobile Data: A Gold Mine for Telcos

Cell phone companies are finding that they're sitting on a gold mine--in the form of the call records of their subscribers. 

Researchers in academia, and increasingly within the mobile industry, are working with large databases showing where and when calls and texts are made and received to reveal commuting habits, how far people travel for public events, and even significant social trends. 

 Call center: This network shows phone calls between around two million cell phone users in Belgium over six months; each dot represents a tightly connected group of people, and its color represents the language they speak. The Dutch-speaking (green) and French-speaking (red) communities are starkly divided, linked only by a smaller cluster representing users in Brussels.

With potential applications ranging from city planning to marketing, such studies could also provide a new source of revenue for the cell phone companies. "Because cell phones have become so ubiquitous, mining the data they generate can really revolutionize the study of human behavior," says Ramón Cáceres, a lead researcher at AT&T's research labs in Florham Park, NJ. 

If you were an AT&T subscriber and were near Los Angeles or New York between March 15 and May 15 last year, there's a 5 percent chance that your data was crunched by Cáceres and his colleagues in a study of the travel habits of the company's subscribers. The researchers amassed millions of call records from hundreds of thousands of users in 891 zip codes, covering every New York borough, 10 New Jersey counties, as well as Los Angeles, Orange, and Ventura counties in California.

The data set is a collection of call detail records, or CDRs--the standard feedstock of cell phone data mining. A CDR is generated for every voice or SMS connection. Among other things, it shows the origin and destination number, the type and duration of connection, and, most crucially, the unique ID of the cell tower a handset was connected to when a connection was made.

That let the AT&T team know the location of a phone to within a mile radius at the time each CDR was generated, making it possible to determine the distance traveled from home by each cell phone every day. The group found that, on average, people living in Manhattan travel 2.5 miles most days, compared to five miles in Los Angeles. "But we also found that when you look at the longest trips people make, people that live in New York go significantly further, 69 miles on a weekday compared to 29 in Los Angeles," Cáceres says.

Cáceres hopes to work with city planners, who would usually have to resort to expensive and limited surveys to gather such information. "This kind of data can help them decide how to invest resources, for example if they want to know where to build a new train or subway station," he says. The AT&T work was presented earlier this month at a workshop in Cambridge, MA, as part of the NetSci conference on network science.

For now, Cáceres's group is looking to collaborate rather than commercialize. But cell phone networks are thinking about monetizing their data, says Jean Bolot, a researcher at network operator Sprint. This means a "two-sided" business model where they not only serve end users but also make money through relationships with other businesses. "This is new in the telco space but not in other areas--look at Google, for example," he says. 

Since almost everyone has a cell phone, the scale of the data is immense compared to other sources. Mobility patterns might, for example, be used to adjust property or billboard advertising prices. "Just about every operator on the planet is probably thinking about this right now," says Bolot. 

Another study, presented by Francesco Calabrese, a research scientist at MIT, and colleagues correlated location traces from roughly a million cell phones in greater Boston with listings of public events such as baseball games and plays, showing how people traveled to attend these events. "We could partly predict where people will come from for future events," the team wrote in a report on their work, suggesting it could be possible to provide accurate traffic forecasts for special events.

The surge of research in this area has been enabled by the development of algorithms that can efficiently handle large networks consisting of millions of links, says Vincent Blondel, a professor of applied mathematics at Université Catholique de Louvain, near Brussels, who organized the Cambridge workshop. 

Blondel's research includes an analysis of connections between two million cell phone users in Belgium. It revealed that the French-speaking and Dutch-speaking populations of the country are barely connected by calls and texts. "This is interesting, since there are already discussions within Belgium about splitting the country in two," says Blondel.

Research in this area is typically focused on aggregate information and not individuals, but questions remain about how to protect user privacy, Blondel says. It is standard to remove the names and numbers from a CDR, but correlating locations and call timings with other databases could help identify individuals, he says. In the MIT study, for example, the team could infer the approximate home location of users by assuming it to be where a handset was most often located between 10 p.m. and 7 a.m., although they also lumped people together into groups by zip code.

"I feel the scientific community should take responsibility for finding out how to trade off having useful data and protecting privacy," says Blondel. He is investigating the effect of techniques like using approximate rather than exact location information, or blurring the exact time stamps of calls from a data set.

By Tom Simonite
From Technology Review

Scientist says he's infected with a computer virus

A scientist at the University of Reading says he's become the first person in the world to be infected by a computer virus. Dr Mark Gasson, from the School of Systems Engineering, contaminated a computer chip which was then implanted in his hand. He says it's no gimmick, but could have huge implications for health devices such as heart pacemakers and cochlear implants.


"Our research shows that implantable technology has developed to the point where implants are capable of communicating, storing and manipulating data," he said. 

"They are essentially mini computers. This means that, like mainstream computers, they can be infected by viruses and the technology will need to keep pace with this so that implants, including medical devices, can be safely used in the future."

A high-end Radio Frequency Identification (RFID) chip was implanted into Gasson's left hand last year, giving him secure access to his University building and his mobile phone. 

Once infected, the chip corrupted the main system used to communicate with it.
"I believe it is necessary to acknowledge that our next evolutionary step may well mean that we all become part machine as we look to enhance ourselves," says Gasson. 

"Indeed, we may find that there are significant social pressures to have implantable technologies, either because it becomes as much of a social norm as say mobile phones, or because we'll be disadvantaged if we do not. However we must be mindful of the new threats this step brings."

From tgdaily.com

High Level of Bacteria Found in Bottled Water in Canada

"Despite having the cleanest tap water a large number of urban Canadians are switching over to bottled water for their daily hydration requirements. Unsurprisingly, the consumer assumes that since bottled water carries a price tag, it is purer and safer than most tap water," says Sonish Azam, a researcher on the study.

A recent Canadian study found that heterotrophic bacteria counts in more than 70 percent of bottled water samples exceeded the recommended limits specified by the United States Pharmacopeia (USP).

Regulatory bodies such as the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), and Health Canada have not set a limit for heterotrophic bacteria counts in bottled drinking water. However, according to the USP, no more than 500 colony-forming units (cfu) per milliliter should be present in drinking water.

The study was initiated in response to a Ccrest employee's complaint of foul taste and sickness after consuming bottled water at the company. Azam and her colleagues Ali Khamessan and Massimo Marino randomly purchased several brands of bottled water from a local marketplace and subjected them to microbiological analysis. They discovered that more than 70 percent of the well-known brands tested did not meet the USP specifications for drinking water.

"Heterotrophic bacteria counts in some of the bottles were found to be in revolting figures of one hundred times more than the permitted limit," says Azam. In comparison the average microbial count for different tap water samples was 170 cfu/mL.

Azam stresses that these bacteria most likely do not cause disease and they have not confirmed the presence of disease-causing bacteria, but the high levels of bacteria in bottled water could pose a risk for vulnerable populations such as pregnant women, infants, immunocompromised patients and the elderly.

"Bottled water is not expected to be free from microorganisms but the cfu observed in this study is surprisingly very high. Therefore, it is strongly recommended to establish a limit for the heterotrophic bacteria count as well as to identify the nature of microorganisms present in the bottled water," says Azam.

From sciencedaily.com

Software Works Out What's Troubling a PC

Even a brand new computer can be slowed down by a particular combination of hardware or software, and it's difficult to figure out what's wrong. Soluto, a startup based in Tel Aviv, Israel, that launched this week, hopes to offer users advice on how to avoid this kind of slowdown.


Soluto's software runs in the background on a PC and is designed to detect problems that slow a machine down as well as solutions that speed it up again. The idea is to collect this information and use it to recommend software and hardware fixes to other users. "We know when you're frustrated, we know what causes it, and if you do something smart to your PC, we learn from it," says Roee Adler, chief product officer for the company.

Soluto's first product, launched this week at the TechCrunch Disrupt conference in New York City, is designed to address problems that slow down a computer as it starts. The free version of the software simply observes what's happening on the machine and recommends fixes to speed up the process. The company also sells a premium version that performs fixes automatically. Soluto hopes that this product will appeal to less tech-savvy consumers and to small businesses and enterprises. This product collects a lot more information about what's happening on a user's PC, which it stores in Soluto's "PC Genome" database. 

Adler explains that the company has spent two and a half years working on algorithms that can recognize when a user is frustrated with his or her PC. The software learns from how people deal with these problems in order to recommend changes that can help other users. Soluto's software communicates with a Web server that stores statistical data that's used to interpret what's happening on the machine. 

The software observes events such as repeated mouse clicks--which may indicate that the user is not getting a response--and sudden shifts in how the PC is using resources, which may mean that programs are fighting over resources. Users can also manually inform Soluto that the computer is experiencing difficulties, and this causes the system to pay particular attention to recent events on the machine.
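
A toy version of the repeated-clicks signal might look like the following: flag any short window in which clicks arrive much faster than they do when the machine is responding normally. The burst size and time window here are invented for illustration; Soluto has not published its actual detection thresholds or algorithm.

    def frustration_bursts(click_times, burst_size=5, window_seconds=2.0):
        """Return start times of bursts of at least `burst_size` clicks within a short window.

        click_times is a sorted list of click timestamps in seconds. The thresholds
        are made up; a real detector would be tuned against observed behavior.
        """
        bursts = []
        for i in range(len(click_times) - burst_size + 1):
            if click_times[i + burst_size - 1] - click_times[i] <= window_seconds:
                bursts.append(click_times[i])
        return bursts

    # A user clicking frantically on an unresponsive window around t = 10 s:
    clicks = [1.0, 4.2, 10.0, 10.3, 10.5, 10.8, 11.1, 11.3, 20.0]
    print(frustration_bursts(clicks))  # -> [10.0, 10.3]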

A common scenario, Adler says, is a user working in Excel when the computer suddenly slows down. This might be because the user's antivirus software has suddenly started updating itself, initiating a battle for resources. Once the software sees that there's a problem, it waits to see if the user has figured out a solution. If the problem happens less frequently, Soluto's software attempts to identify changes to the system--such as the removal of software or changes to settings--that could correlate with the improvement. Once data is gathered from a large enough number of users, the system can recommend actions that could help other users.
Adler notes that the statistical aspect is important here--in many cases, problems are caused by how particular versions of a piece of software interact with particular hardware and a particular version of Windows. By collecting that data, Soluto can show users what's worked for others who run the same systems. 
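
The statistical step can be pictured as a before-and-after comparison: for each candidate change observed on machines with a similar hardware and software profile, compare how often the slowdown occurred before the change with how often it occurred afterward, aggregated over many users. The sketch below illustrates that idea with invented data; it is not Soluto's algorithm.

    from collections import defaultdict

    # Hypothetical reports: (machine_profile, change_applied, slowdowns per week before, after).
    reports = [
        ("win7+antivirus_x", "disable antivirus_x scheduled scan", 6, 1),
        ("win7+antivirus_x", "disable antivirus_x scheduled scan", 5, 2),
        ("win7+antivirus_x", "uninstall toolbar_y",                4, 4),
        ("win7+antivirus_x", "disable antivirus_x scheduled scan", 7, 1),
    ]

    totals = defaultdict(lambda: [0, 0, 0])  # change -> [users, before_sum, after_sum]
    for profile, change, before, after in reports:
        if profile == "win7+antivirus_x":  # only compare like-for-like systems
            t = totals[change]
            t[0] += 1
            t[1] += before
            t[2] += after

    for change, (users, before, after) in totals.items():
        improvement = (before - after) / before if before else 0.0
        print(f"{change}: {users} users, avg slowdowns {before / users:.1f} -> "
              f"{after / users:.1f} ({improvement:.0%} fewer)")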

The company says it is conscious of privacy concerns. Adler says the software collects no information about the user--no registration details are required to download the program, and Soluto gathers no information about the users' demographics. Soluto has so far raised $7.8 million in two rounds of funding.
The judges at the Disrupt conference liked Soluto but expressed concerns about how it would distribute its products. The logical partnership is with PC vendors, said Chi-Hua Chien, a partner at Kleiner Perkins Caufield & Byers, which has not invested in Soluto. However, he added that Soluto probably would not be able to make such deals, because many PC vendors have agreements to load their machines with software that may slow them down.

Shmuel Chafets, director of business development at Giza Venture Capital, one of Soluto's investors, says the investment appealed to him because of its technical sophistication and broad consumer appeal. "If you have ever owned a PC, if you've used it for even an hour, you understand what Soluto does," Chafets says.

By Erica Naone 
From Technology Review

New Drugs for Macular Degeneration

In 2005, two genetic studies of people with age-related macular degeneration (AMD)--the most common cause of blindness in people older than 65--made a surprising discovery. Research showed that defects in a gene that is an important regulator of parts of the immune system significantly increased risk of the disease. Scientists have since identified variants in several related genes that also boost risk, and which collectively account for about 50 to 60 percent of the heritability of the disorder.

Eye colors: Drusen, the yellow flecks in this image of the retina, are common in people with age-related macular degeneration. These flecks are made up of proteins involved in the part of the immune system called the complement system, which has also been implicated in the disease by genetic studies.

At the same time that researchers identified the harmful variation linked to AMD, Gregory Hageman, now at the University of Utah, identified a protective variant found in about 20 percent of the population. "That form is so incredibly protective that people with two copies are almost guaranteed not to develop the disease," he says. Hageman founded Optherion, a startup based in New Haven, CT, and investigated how to translate the findings into new treatments. Optherion is now producing large quantities of an engineered version of the protein and doing preclinical safety and effectiveness testing--for example, examining whether the treatment can reduce ocular deposits in mice that lack the protein, says Colin Foster, Optherion's president. He declined to estimate when the company will begin clinical trials of the drug. 

Scientists hope that these developments will prove to be an example of the benefits that can arise from a type of genetic study called genome-wide association. The genome-wide studies of macular degeneration were among the first and perhaps the biggest success for the approach, which employs specially designed chips dotted with markers to cheaply detect hundreds of thousands of the most common variations in the human genome. While these chips have allowed scientists to cheaply scan the genomes of many patients and healthy controls, the approach has come under increasing scrutiny in the last couple of years. Even huge studies of thousands of people have failed to identify the majority of the heritability of common diseases, such as type 2 diabetes or Alzheimer's disease. But David Altshuler, a physician and geneticist at the Broad Institute, in Cambridge, MA, and one of the primary architects of these studies, argues that this is not the best way to measure their success. Rather than using the results to design diagnostics to predict an individual risk for developing a disease, we should use genome-wide association studies to identify new drug targets, he says. And he points toward macular degeneration as an example.

Prior to the 2005 studies, few people studying macular degeneration suspected a major role for the complement immune system, which helps to clear pathogens from the body. The link between the complement factor H gene, which is a major inhibitor of the complement immune system, and other genes to macular degeneration has allowed scientists to explore the pathology of the disease in greater molecular detail. Mice lacking the protein altogether develop kidney and eye problems. (Mice don't have maculas, so it's impossible to accurately mimic the disease in rodents.) Human cells expressing the mutated version of the protein have altered immune function. 

Hageman, who has since left Optherion, is exploring the power of the protective protein in novel ways. Because most complement factor H protein is made in the liver, his team is examining macular degeneration in people who undergo liver transplants. "We have seen cases where people who received a liver from someone with the risk form of the protein have developed macular degeneration quickly," he says. "And we have seen a couple of cases where someone had AMD and progression was halted after receiving a liver from someone with the protective form." But he cautions that these cases are anecdotal; researchers need to examine many more patients to see if the effect is statistically significant.

While it's not exactly clear how alterations in the complement factor H gene boost risk for macular degeneration, scientists theorize that the mutant protein can no longer adequately control the complement immune system, perhaps triggering it to attack healthy cells rather than the pathogens it was designed to fight. "Chronic activation of complement and chronic inability to control it probably helps to explain the age-relatedness of the disease," says Hageman.

Anand Swaroop, a researcher at the National Eye Institute, in Bethesda, MD, points out that while the complement system is important in AMD, genome-wide association studies have implicated other genes and pathways as well. "We know that in addition to the complement system, there are three or four other pathways involved, as well as environmental factors," says Swaroop. "Those variants are clearly as important and we have no idea what they do. I think the ultimate cure will come from targeting multiple pathways."

By Emily Singer 
From Technology Review

Developers Reinvent the Music Store

A new partnership between "music intelligence" platform Echonest and streaming music service Play.me lets developers create apps that offer new ways to find music and stream whole tracks for free. The deal can be used to create an app that streams up to five hours a week of music from a catalog of three million tracks; once the weekly streaming limit is reached, users have to pay up $10 a month for unlimited streaming--a pricing scheme identical to competitor Spotify.

Music wheel: The Slice app for Android was created by students at Olin College of Engineering. It uses data from Echonest and Play.me to explore music visually.

Music Explorer FX is just one of more than 70 apps that already take advantage of Echonest's application programming interface (API), which feeds data to apps from a vast catalog of artists and tracks. The API, which maps connections between similar songs and artists, is available to any developer who signs up for an Echonest API key. Through the deal with Play.me, Echonest's service can be used to stream tracks from labels including Sony, EMI, and the Orchard. Overnight, online music stalwarts Pandora, Last.fm, and Grooveshark find that they have dozens of competitors built by small teams and even individual coders. "App developers are, we believe, the future of this space," says Echonest's CEO, Jim Lucchese. "They are the future of [music] retail."

Developers don't sell music under the new deal, but they get a cut whenever a user signs up for the Play.me streaming service through their app. This makes them, in essence, sales affiliates for Play.me, and by extension for all the labels whose music it aggregates.

Developers have always been able to cut deals with record labels individually, but that process was prohibitively expensive and time-consuming. By obtaining blanket rights to stream all the music in its catalog, Play.me has eliminated that headache. "From a lawyer standpoint, an API is a very efficient contract," says Lucchese. "It's as if you said, 'Here's my stuff, and here are the rules, and as long as you play by the rules, we're good to go.' " Using the API that allows access to Echonest's database, five students at Olin College of Engineering were able to put together, in a single semester, a mobile app that explores unexpected connections between bands. The result is the Slice app for Android. The app uses a spinning, color-coded wheel to make it easy to hop from one artist to the next. The goal is to create something more dynamic than traditional Internet radio offerings by giving users a different way to explore the relatedness of artists and genres.
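
In outline, an app like Slice needs only two things from the services: a similarity lookup keyed by a developer's API key, and a way to stream the chosen tracks. The snippet below shows that flow in schematic form; the URL, parameter names, and response fields are placeholders for illustration, not the documented Echonest or Play.me endpoints.

    import requests  # third-party HTTP library

    API_KEY = "YOUR_ECHONEST_KEY"  # developers sign up for a key to use the API

    def similar_artists(name):
        """Ask a similarity endpoint for artists related to `name`.

        The endpoint and parameters here are illustrative placeholders, not the
        actual documented API.
        """
        resp = requests.get(
            "https://api.example.com/artist/similar",  # placeholder URL
            params={"api_key": API_KEY, "name": name, "results": 10},
            timeout=10,
        )
        resp.raise_for_status()
        return [artist["name"] for artist in resp.json().get("artists", [])]

    if __name__ == "__main__":
        for artist in similar_artists("The National"):
            print(artist)  # an app like Slice would lay these out on its wheel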

Replacing individual contracts between developers and labels with a blanket contract and an API is equally advantageous for record labels, Lucchese says. "[Play.me and its labels] basically get an outsourced app development team and a powerful affiliate network of cool apps."

But the complicated tangle of rights attached to the music libraries owned by major labels means there are limits to the kinds of apps that developers can build with Play.me. For example, if a developer created a piece of software that syncs a track streamed from Play.me to a video--as in a Guitar Hero-style rhythm game--it would require a totally separate set of licenses, including a "sync license."

Even with these limitations in place, there are plenty of opportunities for developers.
Eventually, says Lucchese, Play.me, Echonest, and all of its affiliated apps could have access to the catalogs of almost every label. "There are probably more than 20 [online] services that have content from all four of the major labels. I'm sure Play.me is going to get there--it just takes time to license them all," says Lucchese.

By Christopher Mims 
From Technology Review

To Attack H1N1, Other Flu Viruses, Gold Nanorods Deliver Potent Payload

The work is published in the current issue of the Proceedings of the National Academy of Sciences.
"This joint research by UB and the CDC has the potential to usher in a new generation of antiviral medicines to aggressively treat a broad range of infectious diseases, from H1N1 to avian flu and perhaps Ebola, that are becoming increasingly resistant to the medicines used against them," says UB team leader Paras Prasad, PhD, executive director of the UB Institute for Lasers, Photonics and Biophotonics (ILPB) and SUNY Distinguished Professor in the departments of Chemistry, Physics, Electrical Engineering and Medicine.

These human bronchial epithelial cells have been transfected with nanoplexes, developed by scientists at UB and CDC, that are uniformly distributed surrounding the cell nuclei.

The collaborative work between UB and CDC came together through the work of Krishnan Chakravarthy, an MD/PhD candidate at UB and the paper's first author. This research constitutes part of his doctoral degree work that focused on host response to influenza infection and novel drug delivery strategies.

The paper describes a single-stranded RNA molecule that prompts a strong immune response against the influenza virus by ramping up the host's cellular production of interferons, proteins that inhibit viral replication.
But, like most RNA molecules, it is unstable when delivered into cells. The gold nanorods produced at UB act as an efficient vehicle for delivering the powerful immune-activator molecule into cells.

"It all boils down to how we can deliver the immune activator," says Suryaprakesh Sambhara, DVM, PhD, in CDC's Influenza Division and a co-author on the paper. "The UB researchers had an excellent delivery system. Dr. Prasad and his team are well-known for their contributions to nanoparticle delivery systems."
A key advantage is gold's biocompatibility.

"The gold nanorods protect the RNA from degrading once inside cells, while allowing for more selected targeting of cells," said co-author Paul R. Knight III, MD, Chakravarthy's thesis advisor; professor of anesthesiology, microbiology and infectious diseases in the UB School of Medicine and Biomedical Sciences; and director of its MD/PhD program.

"This work demonstrates that the modulation of host response is going to be critical to the next generation of anti-viral therapies," Chakravarthy explains. "The novelty of this approach is that most of these kinds of RNA viruses share a common host-response immune pathway; that is what we have targeted with our nanoparticle therapy. By enhancing the host immune response, we avoid the difficulty of ongoing viral resistance generated through mutations."

Diseases that could be effectively targeted with this new approach include any viruses that are susceptible to the innate immune response that type 1 interferons trigger, Prasad notes.

Based on these in vitro results, the UB and CDC researchers are beginning animal studies.
"This collaboration has been extraordinary as two disparate research groups at UB and a third at the CDC have managed to maintain progress toward a common goal: treatment of influenza," says co-author Adela Bonoiu, PhD, UB research assistant professor at ILPB.

Important funding for the UB institute portion of the research was provided by the John R. Oishei Foundation, which helped pave the way for new stimulus funding UB received recently from the National Institutes of Health to further develop this strategy. The goal is to work toward an Investigational New Drug filing with the FDA.

Additional funding was provided by the NIH, the Air Force Office of Scientific Research and the National Vaccine Program Office of the U.S. Department of Health and Human Services.

Co-authors are Earl J. Bergey, PhD, UB research associate professor of chemistry; Hong Ding, PhD, postdoctoral associate, and Rui Hu, formerly a visiting researcher of UB's ILPB, and William Davis, Priya Ranjan, J. Bowzard and Jacqueline M. Katz of the Influenza Division of the CDC.

From sciencedaily.com

Nuclear Reactor Aims for Self-Sustaining Fusion

In a few years, an experimental nuclear fusion reactor near Moscow could be the first to yield a self-sustaining fusion reaction. If the Italian-Russian project is successful, it would be a key milestone for fusion power.

Fusion power: Part of a plasma chamber from an earlier prototype of the planned fusion reactor.

The proposed reactor is based on a design developed by Bruno Coppi, a professor of physics at MIT and principal investigator on the reactor project with Italy's National Agency for New Technologies, Energy and the Environment. Three similar reactors based on the same design have already been built at MIT. Italian and Russian physicists plan to meet on May 24 to chart a course for the new reactor, called Ignitor, in the first such meeting since the two countries agreed to join forces on the project in April.

Ignitor is a tokamak reactor, a doughnut-shaped device that uses powerful magnetic fields to produce fusion by squeezing superheated plasma of hydrogen isotopes. As an electric current and high-frequency radio waves pass through the plasma, heating it to extreme temperatures, the surrounding electromagnetic field confines the plasma under high pressure. The combined pressure and heat causes the hydrogen nuclei to fuse together to form helium in a process that releases tremendous amounts of heat. In a fully functional fusion reactor, this heat would be used to power an electricity-generating turbine.

A much larger, far more complex tokamak fusion reactor--the International Thermonuclear Experimental Reactor (ITER)--is planned for construction in Saint-Paul-lez-Durance, France. ITER, which will be completed in 2019 and ready for full-scale testing in 2026, will be closer to a functioning fusion generator but will not be designed to produce a self-sustaining fusion reaction. Ignitor will be a sixth the size of ITER and will test the conditions needed to produce a self-sustaining reaction. 

"Ignitor will give us a quick look at how burning plasma behaves, and that could inform how we proceed with ITER and other reactors," says Roscoe White, a distinguished research fellow at the Princeton Plasma Physics Laboratory.

But Ignitor will only test one key aspect of fusion. "It will give us information that is important, but it won't give us all the information we need and certainly doesn't replace ITER," says Steven Cowley, director of the Culham Centre for Fusion Energy in Oxfordshire, U.K. "It's a demonstration that you can create ignition, but it's not really a pathway to a reactor."

Unlike ITER, Ignitor doesn't include many of the components that a real reactor would require. For example, one crucial missing part is the "breeder blanket," which contains lithium and sits inside the reactor's magnetic coils, providing a continuous supply of tritium--one of the two isotopes fused in the reaction. Ignitor's design is so compact that there is no room for a test blanket inside its coils.

Another limitation of Ignitor is that its very high magnetic field is more than most superconducting materials can tolerate. To get around this, Ignitor relies primarily on conventional copper coils to create its magnetic field. But these coils can only operate for short bursts before they overheat. As a result, Ignitor can only sustain ignition for bursts of four seconds. ITER, which relies on superconducting coils and also draws on a significantly larger volume of plasma, is designed to maintain its peak output for 400 seconds.

By Phil McKenna
From Technology Review

An Invisible Touch for Mobile Devices

Today, the way to interact with a mobile phone is by tapping its keypad or screen with your fingers. But researchers are exploring ways to use mobile devices that would be far less limited.

Imagine this: A person (top) draws a curved line with his finger, and the gesture is captured by a wearable camera (bottom). The line is transferred to a mobile device, which sends it to a recipient’s screen for display.


Patrick Baudisch, professor of computer science at the Hasso Plattner Institute in Potsdam, Germany, and his research student, Sean Gustafson, are developing a prototype interface for mobile phones that requires no touch screen, keyboard, or any other physical input device. A small video recorder and microprocessor attached to a person's clothing can capture and analyze that person's hand gestures, sending an outline of each gesture to a computer display.

The idea is that a person could use an "imaginary interface" to augment a phone conversation by tracing shapes with their fingers in the air. Baudisch and Gustafson have built a prototype device in which the camera is about the size of a large brooch, but they predict that within a few years, components will have shrunk, allowing for a much smaller system.

The idea of interacting with computers through hand gestures is nothing new. Sony already sells EyeToy, a video camera and software that capture gestures for its PlayStation game consoles; Microsoft has developed a more sophisticated gesture-sensing system, called Project Natal, for the Xbox 360 games console. And a gesture-based research project called SixthSense, developed by Pattie Maes, a professor at MIT, and her student Pranav Mistry, uses a wearable camera to record a person's gestures and a small projector to create an ad-hoc display on any surface.

Baudisch and Gustafson say their system is simpler than SixthSense, requiring fewer components, which should make it cheaper. A person "opens up" the interface by making an "L" shape with her left or right hand. This creates a two-dimensional spatial surface, a boundary for the forthcoming finger traces. Baudisch says that a person could use this space to clarify spatial situations, such as how to get from one place to another. "Users start drawing in midair," he says. "There is no setup effort here, no need to whip out a mobile device or stylus." The researchers also found that users were even able to go back to an imaginary sketch to extend or annotate it, thanks to their visual memory.
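
As a rough sketch of how such a pipeline could be structured (this is illustrative, not Baudisch and Gustafson's implementation, and the fingertip tracking itself is assumed to exist), the following Python shows how traced points might be anchored to a coordinate frame defined by the "L" pose before being forwarded as a polyline to the recipient's screen:

import math

def build_frame(l_corner, l_index_tip):
    """Define a 2-D drawing plane from the "L" hand pose: the corner of
    the L is the origin and the pointing finger sets the x-axis."""
    ox, oy = l_corner
    ax, ay = l_index_tip
    length = math.hypot(ax - ox, ay - oy) or 1.0
    ux, uy = (ax - ox) / length, (ay - oy) / length   # unit x-axis
    vx, vy = -uy, ux                                  # perpendicular y-axis
    return (ox, oy), (ux, uy), (vx, vy)

def to_frame(point, frame):
    """Express a camera-space fingertip point in the L-pose coordinate frame."""
    (ox, oy), (ux, uy), (vx, vy) = frame
    dx, dy = point[0] - ox, point[1] - oy
    return (dx * ux + dy * uy, dx * vx + dy * vy)

def capture_trace(fingertip_points, frame):
    """Turn a raw fingertip trajectory into a polyline in the drawing frame,
    ready to be sent to the recipient's screen for display."""
    return [to_frame(p, frame) for p in fingertip_points]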

A paper detailing the setup and user studies will be presented at the 2010 symposium on User Interface Software and Technology in New York in October.

Andy Wilson, a senior researcher at Microsoft who led the development of Surface, an experimental touch-screen table, says the work could be a sign of things to come. "I think it's quite interesting in the sense that it really is the ultimate in thinking about when devices shrink down to nothing--when you don't even have a display," he says.

Wilson notes that the interface draws on the fact that people naturally use their hands to explain spatial ideas. "That's a quite powerful concept, and it hasn't been explored," he says. "I think they're onto something."

By Kate Greene 
From Technology Review

Taming Tinnitus with Electrical Stimulation

Electrically stimulating the vagus nerve, which connects the brain and the visceral organs, could help temper the phantom sounds that plague tinnitus sufferers. Researchers from Microtransponder, a Dallas-based startup developing wireless stimulation technology, reported at a neurotechnology conference in Boston this week that the approach works in animals with auditory damage that mimics the disorder. The company is adapting its neurostimulation technology, currently being developed for chronic pain, to target the vagus nerve. 

Tinnitus, the false perception of ringing or other sounds in the ear, affects millions of people worldwide. Most often associated with hearing loss, it has become an especially common problem in soldiers exposed to loud blasts. The severity of the disorder varies widely, from relatively benign to debilitating, and the few existing treatments tend to mask the intrusive sound rather than eliminate it.

While it's unclear exactly what causes tinnitus, research suggests it arises from the brain's attempt to compensate for hearing loss. Damage to the inner ear, which translates sound vibrations into neural signals for the brain, results in less input to the brain's auditory pathways. The brain appears to try to make up for this loss of input by increasing activity, which may in turn result in phantom sounds. 

Michael Kilgard, a neuroscientist at the University of Texas, aims to reverse this maladaptive reorganization using a combination of electrical stimulation and sound. Kilgard has previously shown that stimulating part of the brain called the nucleus basalis while playing a particular tone triggers the auditory cortex to reorganize to become hyper-responsive to that tone. To treat tinnitus, the idea is to stimulate this area while playing all sound frequencies except the one corresponding to a patient's phantom sound, thus signaling to the brain to become more responsive to all these other frequencies. If successful, this would rebalance the auditory cortex.
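
As a purely illustrative sketch of that idea (this is not Kilgard's clinical protocol; the frequency range, tone spacing, and width of the excluded band are all assumptions), the Python below builds a set of tones spanning the audible range while skipping the patient's phantom-sound frequency, so that each remaining tone could be paired with a stimulation pulse:

import math

def paired_stimulation_tones(tinnitus_hz, exclusion_octaves=0.5,
                             low_hz=250.0, high_hz=16000.0, steps_per_octave=2):
    """Return tone frequencies (Hz) spanning low_hz..high_hz, omitting any
    tone within +/- exclusion_octaves of the patient's phantom-sound pitch.
    In the approach described above, each remaining tone would coincide
    with a stimulation pulse."""
    tones = []
    f = low_hz
    while f <= high_hz:
        if abs(math.log2(f / tinnitus_hz)) > exclusion_octaves:
            tones.append(round(f, 1))
        f *= 2 ** (1.0 / steps_per_octave)
    return tones

# Example: for a patient with a 4 kHz phantom tone,
# paired_stimulation_tones(4000.0) returns tones from 250 Hz up to
# roughly 16 kHz, skipping those within half an octave of 4 kHz.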

Rather than targeting the brain directly in humans, Kilgard turned to the vagus nerve, part of the nervous system that connects the stomach, liver, and other organs to the brain. Implanted devices that stimulate the vagus nerve are currently approved to treat depression and epilepsy and are being tested for other disorders.

Researchers plan to test the concept in people with tinnitus in upcoming clinical trials in Belgium. Kilgard says the researchers will use simple electrodes, which are implanted at the neck and stimulated with an external device. While the exact parameters are still to be determined, patients will undergo treatment for half an hour to an hour each day, for days or weeks. Unlike vagus nerve stimulation for epilepsy, which involves chronic stimulation, treatment for tinnitus will likely be for a limited period of time, researchers say.

In conjunction with these clinical tests, Microtransponder is modifying its existing technology for tinnitus. Unlike other stimulation devices, Microtransponder's system is wireless and has no batteries. The implanted portion consists of small electrodes and a small coil. An external battery-powered coil worn like a cuff on the arm or leg powers the device. "The idea would be to inject the wireless device and then put a coil around the neck to activate it [during a treatment session]," says Kilgard. "If the tinnitus comes back five years later, the device is still there and you can do the treatment again."

Harvard's Melcher says the approach is very interesting, though "whether it works is an open question." She points out that "we are still trying to sort out what aspects of brain plasticity are involved in tinnitus. There may be different kinds of tinnitus, with different types of brain activity giving rise to the perception of sounds that aren't there." All of these may require different treatments.

By Emily Singer 
From Technology Review

Invention Regulates Nerve Cells Electronically

The invention, an ion transistor that opens new avenues for controlling chemical signals, is described in the coming issue of the scientific journal Proceedings of the National Academy of Sciences. The authors are Klas Tybrandt and Magnus Berggren of Linköping University, who developed the invention, and Karin Larsson and Agneta Richter-Dahlfors at the Karolinska Institute, who have used it in experiments with cultured nerve cells.

The four scientists work at the OBOE Research Center, which is dedicated to the study and regulation of processes in living cells and tissue through the use of organic electronics.

Previously, nano-channels and nano-pores have been used to actively control the concentration and transport of ions. But such components are difficult to produce and, moreover, function poorly at the high salt concentrations that interaction with biological systems requires.

"To get around these problems, we exploited the similarity between ion-selective membranes -- plastics that only conduct ions of one charge -- and doped semiconductors, such as silicon. It was previously known that it is possible to produce diodes from such membranes. We took it a step further by joining two ion diodes into a transistor," says Klas Tybrandt, a doctoral candidate in organic electronics.

When an ion transistor was connected to cultured nerve cells, it could be used to control the delivery of the signaling substance acetylcholine locally to the cells. The successful result demonstrates both that the component functions together with biological systems and that even small charged biomolecules can be transported without difficulty.

"Since the ion transistor is made of plastic, it can be integrated with other components we are developing. This means we can make use of inexpensive printing processes on flexible materials. We believe ion transistors will play a major role in various applications, such as the controlled delivery of drugs, lab-on-a-chip and sensors," says Magnus Berggren, Önnesjö professor of organic electronics.

The research center OBOE (organic bioelectronics) is funded by the Foundation for Strategic Research.

From sciencedaily.com

Gesture-Based Computing on the Cheap: Multicolored Gloves Making Minority Report-Style Interfaces More Accessible

Academic and industry labs have developed a host of prototype gesture interfaces, ranging from room-sized systems with multiple cameras to detectors built into laptops' screens. But MIT researchers have developed a system that could make gestural interfaces much more practical. Aside from a standard webcam, like those found in many new computers, the system uses only a single piece of hardware: a multicolored Lycra glove that could be manufactured for about a dollar.

The hardware for a new gesture-based computing system consists of nothing more than an ordinary webcam and a pair of brightly colored Lycra gloves.

Other prototypes of low-cost gestural interfaces have used reflective or colored tape attached to the fingertips, but "that's 2-D information," says Robert Wang, a graduate student in the Computer Science and Artificial Intelligence Laboratory who developed the new system together with Jovan Popović, an associate professor of electrical engineering and computer science. "You're only getting the fingertips; you don't even know which fingertip [the tape] is corresponding to." Wang and Popović's system, by contrast, can translate gestures made with a gloved hand into the corresponding gestures of a 3-D model of the hand on screen, with almost no lag time. "This actually gets the 3-D configuration of your hand and your fingers," Wang says. "We get how your fingers are flexing."

The most obvious application of the technology, Wang says, would be in video games: Gamers navigating a virtual world could pick up and wield objects simply by using hand gestures. But Wang also imagines that engineers and designers could use the system to more easily and intuitively manipulate 3-D models of commercial products or large civic structures.

Patchwork approach
The glove went through a series of designs, with dots and patches of different shapes and colors, but the current version is covered with 20 irregularly shaped patches that use 10 different colors. The number of colors had to be restricted so that the system could reliably distinguish the colors from each other, and from those of background objects, under a range of different lighting conditions. The arrangement and shapes of the patches were chosen so that the front and back of the hand would be distinct but also so that collisions of similar-colored patches would be rare. For instance, Wang explains, the colors on the tips of the fingers could be repeated on the back of the hand, but not on the front, since the fingers would frequently be flexing and closing in front of the palm.

Technically, the other key to the system is a new algorithm for rapidly looking up visual data in a database, which Wang says was inspired by the recent work of Antonio Torralba, the Esther and Harold E. Edgerton Associate Professor of Electrical Engineering and Computer Science in MIT's Department of Electrical Engineering and Computer Science and a member of CSAIL. Once a webcam has captured an image of the glove, Wang's software crops out the background, so that the glove alone is superimposed upon a white background. Then the software drastically reduces the resolution of the cropped image, to only 40 pixels by 40 pixels. Finally, it searches through a database containing myriad 40-by-40 digital models of a hand, clad in the distinctive glove, in a range of different positions. Once it's found a match, it simply looks up the corresponding hand position. Since the system doesn't have to calculate the relative positions of the fingers, palm, and back of the hand on the fly, it's able to provide an answer in a fraction of a second.
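
A stripped-down sketch of that lookup step, assuming NumPy and a precomputed pose database rather than the authors' actual code, might look like this:

import numpy as np

# database_images: N x 4800 array of flattened 40 x 40 x 3 glove renderings,
# database_poses:  the hand configuration stored alongside each rendering.
# Both are assumed to have been generated offline.

def shrink_to_40x40(cropped_glove_image):
    """Reduce the cropped, white-background glove image to 40 x 40 pixels
    (nearest-pixel subsampling here; a real system would filter properly)."""
    h, w, _ = cropped_glove_image.shape
    rows = (np.arange(40) * h) // 40
    cols = (np.arange(40) * w) // 40
    return cropped_glove_image[rows][:, cols].astype(np.float32)

def lookup_pose(cropped_glove_image, database_images, database_poses):
    """Return the stored hand pose whose tiny image best matches the query."""
    query = shrink_to_40x40(cropped_glove_image).ravel()
    distances = np.linalg.norm(database_images - query, axis=1)
    return database_poses[int(np.argmin(distances))]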

Of course, a database of 40-by-40 color images takes up a large amount of memory -- several hundred megabytes, Wang says. But today, a run-of-the-mill desktop computer has four gigabytes -- or 4,000 megabytes -- of high-speed RAM. And that number is only going to increase, Wang says.
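
For a rough sense of where "several hundred megabytes" comes from, here is the back-of-the-envelope arithmetic; the entry count below is an assumed figure, since the article gives only the per-image size and the overall order of magnitude.

bytes_per_entry = 40 * 40 * 3           # one uncompressed 40 x 40 RGB image
assumed_entries = 100_000               # hypothetical number of stored poses
print(bytes_per_entry * assumed_entries / 1e6, "MB")   # ~480 MB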

Changing the game
"People have tried to do hand tracking in the past," says Paul Kry, an assistant professor at the McGill University School of Computer Science. "It's a horribly complex problem. I can't say that there's any work in purely vision-based hand tracking that stands out as being successful, although many people have tried. It's sort of changing the game a bit to say, 'Hey, okay, I'll just add a little bit of information'" -- the color of the patches -- "'and I can go a lot farther than these purely vision-based techniques.'" Kry particularly likes the ease with which Wang and Popović's system can be calibrated to new users. Since the glove is made from stretchy Lycra, it can change size significantly from one user to the next; but in order to gauge the glove's distance from the camera, the system has to have a good sense of its size. To calibrate the system, the user simply places an 8.5-by-11-inch piece of paper on a flat surface in front of the webcam, presses his or her hand against it, and in about three seconds, the system is calibrated.

Wang initially presented the glove-tracking system at last year's Siggraph, the premier conference on computer graphics. But at the time, he says, the system took nearly a half-hour to calibrate, and it didn't work nearly as well in environments with a lot of light. Now that the glove tracking is working well, however, he's expanding on the idea, with the design of similarly patterned shirts that can be used to capture information about whole-body motion. Such systems are already commonly used to evaluate athletes' form or to convert actors' live performances into digital animations, but a system based on Wang and Popović's technique could prove dramatically cheaper and easier to use.

From sciencedaily.com