Google and Microsoft Talk Artificial Intelligence

Google and Microsoft don't share a stage often, being increasingly fierce competitors in areas such as Web search, mobile, and cloud computing. But the rivals can agree on some things—like the importance of artificial intelligence to the future of technology.

Meeting of the minds: Peter Norvig and Eric Horvitz agree that AI is a key to the future of technology.

Peter Norvig, Google's director of research, and Eric Horvitz, a distinguished scientist at Microsoft Research, recently spoke jointly to an audience at the Computer History Museum in Mountain View, California, about the promise of AI. Afterward, the pair talked with Technology Review's IT editor, Tom Simonite, about what AI can do today and what they think it will be capable of tomorrow. Some answers have been edited for brevity.

Technology Review: You both spoke on stage about how AI has advanced in recent years through machine-learning techniques that take in large volumes of data and figure out things like how to translate text or transcribe speech. What about areas where we want AI to help but there isn't much data to learn from?

Peter Norvig: What we're doing is like looking under the lamppost for your dropped keys because the light is there. We did really well with text and speech because there's lots of data in the wild. Parsing [breaking down the grammatical elements of sentences] never occurs naturally, except perhaps in someone's linguistics homework, so we have to learn that without [labeled] data. One of my colleagues is trying to get around that by looking at which parts of online text have been made into links, which can signal where a particular part of a sentence begins and ends.
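To make the link idea concrete, here is a minimal sketch (an illustration of the general technique, not Google's actual system): anchor text in HTML tends to span a coherent phrase, so link boundaries can serve as weak supervision for where sentence constituents begin and end.

```python
import re

# A made-up snippet of web text containing a hyperlink.
HTML = '<p>The committee approved <a href="/x">the revised budget plan</a> today.</p>'

def anchor_spans(html: str) -> list[str]:
    """Extract anchor texts as candidate syntactic constituents."""
    return re.findall(r"<a\b[^>]*>(.*?)</a>", html, flags=re.S)

print(anchor_spans(HTML))  # ['the revised budget plan'] -- a plausible noun phrase
```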

Eric Horvitz: I've often thought that if you had a cloud service in the sky that recorded every speech request and what happened next—every conversation in every taxi in Beijing, for example—it could be possible to have AI learn how to do everything.
More seriously, if we can find ways to capture lots of data in a way that preserves privacy, we could make that possible.

Isn't it difficult to use machine learning if the training data isn't already labeled and explained, to give the AI a "ground truth" to start from?

Horvitz: You don't need it to be completely labeled. An area known as semi-supervised learning is showing us that even if 1 percent or less of the data is tagged, you can use that to understand the rest.
But a lack of labels is a challenge. One solution is to actually pay people a small amount to help out a system with data it can't understand, by doing microtasks like labeling images or other small things. I think using human computation to augment AI is a really rich area.
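As a concrete illustration of both points, here is a sketch of classic self-training (the dataset, thresholds, and use of scikit-learn are my assumptions, not anything from Microsoft): start from a roughly 1 percent labeled sample, let the model pseudo-label the points it is confident about, and note that the points it stays unsure about are exactly the ones to send to human labelers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)

# Pretend only 1 percent of the data comes with labels.
labeled = np.zeros(len(y), dtype=bool)
labeled[:20] = True
X_lab, y_lab = X[labeled], y[labeled]
X_unlab = X[~labeled]

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# Self-training: pseudo-label the unlabeled points the model is most
# confident about, fold them into the training set, and refit.
for _ in range(5):
    if len(X_unlab) == 0:
        break
    proba = model.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

print("accuracy on the full set:", model.score(X, y))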

Another possibility is to build systems that understand the value of information, meaning they can automatically compute what the next best question to ask is, or how to get the most value out of an additional tag or piece of information provided by a human.
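One simple stand-in for "the next best question to ask" is uncertainty sampling, which queries a human on the example the model is least sure about; full value-of-information methods go further and weigh the expected improvement in a decision against the cost of asking. A hedged sketch, assuming a binary classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def next_best_query(model: LogisticRegression, X_unlabeled: np.ndarray) -> int:
    """Index of the unlabeled example whose label the model is least sure
    of, i.e. the single human tag expected to teach the model the most."""
    proba = model.predict_proba(X_unlabeled)
    margin = np.abs(proba[:, 1] - proba[:, 0])  # small margin = high uncertainty
    return int(margin.argmin())
```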

Norvig: You don't have to tell a learning system everything. There's a type of learning called reinforcement learning where you just give a reward or punishment at the end of a task. Say you lost a game of checkers: you aren't told where you went wrong, and you have to learn what to do to get the reward next time.
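The checkers example is the shape of classic reinforcement learning: nothing is labeled move by move, yet the end-of-task signal is enough. Here is a toy sketch (tabular Q-learning on an invented six-state chain; the environment and constants are illustrative, not Google code) where the reward arrives only at the final step and the update rule propagates it back to earlier moves:

```python
import random

N, ACTIONS = 6, (0, 1)               # states 0..5 on a chain; 0 = left, 1 = right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s):
    """Best known action in state s, breaking ties at random."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for _ in range(500):                 # episodes
    s = 0
    while s != N - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0       # reward arrives ONLY at the end
        # The Q-learning backup propagates that final reward to earlier
        # moves, so the agent is never told where it went wrong mid-game.
        target = r + gamma * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2

print([greedy(s) for s in range(N - 1)])  # should settle on 1 (move right) everywhere
```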

All this is very different from the early days of artificial intelligence, in the '50s and '60s, when researchers made bold predictions about matching human ability and tried to use high-level rules to create intelligence. Are your machine-learning systems working out those same high-level rules for themselves?

Horvitz: Learning systems can derive high-level situational rules for action, for example, to take a set of [physiological] symptoms and test results and spit out a diagnosis. But that isn't the same as general rules of intelligence.

It may be that the lower-level work we do today will one day meet those top-down ideas from the bottom up. The revolution in AI that Peter and I were part of was the recognition that decision making under uncertainty is central and can be done with probabilistic approaches. Along with the probabilistic revolution in AI comes a perspective: we are very limited agents, and incompleteness is inescapable.
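The probabilistic turn Horvitz describes is easy to state concretely: a limited agent doesn't prove what is true or false, it weighs actions by expected utility under its beliefs. A minimal sketch, with invented numbers echoing the diagnosis example above:

```python
# Decision making under uncertainty: pick the action with the highest
# expected utility given a belief distribution over states.
# All probabilities and utilities below are invented for illustration.
P = {"disease": 0.2, "healthy": 0.8}            # belief after seeing symptoms
U = {
    ("treat", "disease"): 10, ("treat", "healthy"): -2,
    ("wait", "disease"): -8,  ("wait", "healthy"):  1,
}

def expected_utility(action):
    return sum(P[s] * U[(action, s)] for s in P)

best = max(("treat", "wait"), key=expected_utility)
print(best, {a: round(expected_utility(a), 2) for a in ("treat", "wait")})
# -> treat {'treat': 0.4, 'wait': -0.8}
```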

Norvig: In the early days, it was logic that set artificial intelligence apart, and the question was how to use it. The field became the study of what those tools were good for, like chess. But then you can only have things that are true or false, and you can't do a lot of the things we want to do, so we went toward probability. It took the field a while to recognize that other fields, like probability and decision theory, were out there. Bringing those approaches together is a challenge.

As we see more direct evidence of AI in real life, Siri for example, it seems that a new kind of design problem has been created: people building AIs need to make them palatable to our own intelligence.

Norvig: That's actually a set of problems at various levels. At a low level, we understand the human vision system and what making buttons different colors might mean, for example. At a higher level, our expectations of something and how it should behave are based on what we think it is and how we think of its relationship to us.

Horvitz: AI is intersecting more and more with the field of human-computer interaction [the study of the psychology of how we use and think about computers]. The idea that we will have more intelligent things that work closely with people really focuses attention on the need to develop new methods at the intersection of human intelligence and machine intelligence.

What do we need to know more about to make AIs more compatible with humans?

Horvitz: One thing my research group has been pushing to give computers is a systemwide understanding of human attention, so they know when best to interrupt a person. It's been a topic of joint research between our researchers and the product teams.

Norvig: I think we also want to understand the human body a lot more, and you can see in Microsoft's Kinect a way to do that. There's lots of potential to have systems understand our behavior and body language.

Is there any AI in Kinect?

Horvitz: There's quite a lot of machine learning at the core of it. I think the fact that we can take leading-edge AI and develop a consumer device that sold faster than any other in history says something about the field. Machine learning also plays a central role in Bing search, and I can only presume it's also important in Google's search offering. So people searching the Web use AI in their daily lives.

One last question: Can you tell me one recent demo of AI technology that impressed you?

Norvig: I recently read a paper on unsupervised learning by someone at Google who is about to go back to Stanford. It's an area where our improvement curves over time have not looked so good, but he's getting some really good results, and it looks like learning when you don't know anything in advance could be about to get a lot better.

Horvitz: I've been very impressed by apprenticeship learning, where a system learns by example. It has lots of applications. Berkeley and Stanford both have groups really advancing it: for example, helicopters that learn to fly on their backs [upside down] from [observing] a human expert.

By Tom Simonite
From Technology Review
