Computers Get Help from the Human Brain

Most brain-computer interfaces are designed to help disabled people communicate or move around. A new project uses this type of interface to help computers perform tasks they can't handle on their own. In experiments, researchers used it to scan satellite images for surface-to-air missiles faster than either a machine or a human analyst could alone.

Search me: Researchers at Columbia University use signals from a worn electroencephalogram (EEG) device to search rapidly through images.

"With Google, you have to type in words to describe what you're interested in," says Paul Sajda, an associate professor at Columbia University. "But let's say I'm interested in something 'funny looking.' " 

Sajda explains that computers struggle to classify images according to this kind of abstract concept, but humans can do it almost instantly. Electrical signals within the brain fire before a person even realizes he's recognized an image as odd or unusual. 

Sajda's device, called C3Vision (cortically coupled computer vision), uses an electroencephalogram (EEG) cap to monitor brain activity while the person wearing it is shown about 10 images per second. Machine-learning algorithms, trained to detect the neurological signals that signify interest in an image, analyze this brain activity and rapidly rank the images by how interesting they appear to the viewer. The search is then refined by retrieving other images similar to those ranked highest. "It's a search tool that allows you to find images that are very similar to those that have grabbed your attention," says Sajda.
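To make that pipeline concrete, here is a hypothetical sketch in Python. The actual C3Vision classifier, EEG features, and image descriptors have not been published in detail, so everything below uses synthetic data and assumed components: an "interest" decoder trained on labelled EEG epochs, a decoded score for each rapidly flashed image, and a similarity search seeded by the top-ranked hits.

```python
# Hypothetical sketch of a "cortically coupled" image triage loop: classify EEG
# epochs recorded while images flash by, rank images by decoded "interest",
# then pull visually similar images from the rest of the database.
# All data is synthetic; this is NOT the published C3Vision implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

n_train, n_shown, n_db = 200, 100, 10_000
n_eeg_features = 64 * 8          # assumption: 64 electrodes x 8 time-window averages
n_img_features = 128             # assumption: some visual descriptor per image

# 1) Train an "interest" decoder on labelled EEG epochs (interesting vs. not).
X_train = rng.normal(size=(n_train, n_eeg_features))
y_train = rng.integers(0, 2, size=n_train)
X_train[y_train == 1] += 0.3     # fake evoked response for "interesting" epochs
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 2) Rapid serial presentation: one EEG epoch per image shown (~10 images/s).
shown_epochs = rng.normal(size=(n_shown, n_eeg_features))
interest = decoder.predict_proba(shown_epochs)[:, 1]   # decoded interest score
ranking = np.argsort(interest)[::-1]                   # most interesting first

# 3) Refine the search: find database images visually similar to the top hits.
db_features = rng.normal(size=(n_db, n_img_features))
shown_features = db_features[:n_shown]                 # the 100 images we showed
knn = NearestNeighbors(n_neighbors=5).fit(db_features)
_, similar = knn.kneighbors(shown_features[ranking[:10]])
print("Candidate images to inspect next:", np.unique(similar)[:20])
```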

At the speed the images are presented, the conscious brain cannot register a "hit." But the brain's visual pathways work much faster, says Sajda, producing distinct electrical signals that the cap's 64 EEG electrodes can detect and decode. "It's on the edge of the subconscious," he says.

Most brain-computer interface research is focused on harnessing conscious processes, says Eric Leuthardt, director of the Center for Innovation in Neuroscience and Technology at Washington University School of Medicine. "Reading our brain signals and being able to distinguish 'interesting' from 'not interesting' prior to us having a conscious perception of seeing the item tells us that there is a substantial amount of processing that our brain does prior to the conscious awareness of the perception."

Andrew Blake, a computer vision expert and managing director of Microsoft Research Cambridge, in the U.K., says that "controlling machines directly from brain activity is a subject of intense research interest, but it is very difficult to obtain precise control, particularly without invasive methods." 

Sajda calls the approach "information triage" because it uses limited information from the brain to help refine an image search. "The key is, we don't show the whole database. We take a small sample and show it very rapidly," says Sajda. "From 10,000 images, we may show just 100 or so." 
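The sampling idea behind this "information triage" can be sketched as a simple loop, shown below as a hypothetical example: show a small random batch, rank it by decoded interest, then let image similarity expand the candidate pool for the next round. The functions score_interest() and similar_to() are placeholders standing in for the EEG decoding and image-similarity steps, not anything taken from C3Vision itself.

```python
# Minimal sketch of "information triage": never score the whole database, just
# a small rapid sample per round, then let similarity expand the pool.
import numpy as np

rng = np.random.default_rng(1)

def score_interest(image_ids):
    """Placeholder for decoded interest scores from the viewer's EEG."""
    return rng.random(len(image_ids))

def triage(database_ids, similar_to, rounds=3, batch=100, top_k=10):
    candidates = np.array(database_ids)
    for _ in range(rounds):
        shown = rng.choice(candidates, size=min(batch, len(candidates)), replace=False)
        scores = score_interest(shown)
        top = shown[np.argsort(scores)[::-1][:top_k]]
        # Expand the pool with images that look like the top-ranked hits.
        candidates = np.unique(np.concatenate([similar_to(img) for img in top]))
    return candidates

# Toy similarity: neighbouring indices stand in for visually similar images.
similar_to = lambda i: np.arange(max(i - 5, 0), min(i + 5, 10_000))
print(len(triage(range(10_000), similar_to)))
```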

This process can deliver any images that grab the subject's attention. "One of the cool things about the idea is, if you see something new you didn't expect, and it grabs your attention, then this will also get a relatively high score," he says. 

Sajda and colleagues at Columbia have founded a spinoff company called Neuromatters to commercialize the technology, with $4.6 million in funding from the Defense Advanced Research Projects Agency. Beyond military uses, Sajda says the technology might also find applications in advanced gaming interfaces and neuro-marketing. "It could be used for getting demographic feedback on how much an advert grabs people's attention," he says.

By Duncan Graham-Rowe
From Technology Review
