What Watson Can Learn From the Human Brain

Watson won. That set of microchips will soon join the pantheon of machines that have defeated humans, from the steam-powered hammer that killed John Henry to the Deep Blue supercomputer that battled Kasparov. Predictably enough, the win inspired a chorus of “computer overlord” anxieties, as commentators seized on the triumph of microchips to proclaim the decline of the human mind, or at least the coming of the singularity.

Personally, I was a little turned off by the whole event — it felt like a big marketing campaign for IBM and Jeopardy. Nevertheless, I think the real moral of Watson is that our brain, even though it lost the game, is a pretty stunning piece of meaty machinery. Although we always use the latest gadget as a metaphor for the black box of the mind — our nerves were like telegraphs before they were like telephone exchanges before they were like computers — the reality is that our inventions are pretty paltry substitutes. Natural selection has nothing to worry about.

Let’s begin with energy efficiency. One of the most remarkable facts about the human brain is that it requires less energy (12 watts) than a light bulb. In other words, that loom of a trillion synapses, exchanging ions and neurotransmitters, costs less to run than a little incandescence. Compare that to Deep Blue: when the machine was operating at full speed, it was a fire hazard, and required specialized heat-dissipating equipment to keep it cool. Meanwhile, Kasparov barely broke a sweat.

The same lesson applies to Watson. I couldn’t find reliable information on its off-site energy consumption, but suffice it to say it required many tens of thousands of times as much energy as all the human brains on stage combined. While this might not seem like a big deal, evolution long ago realized that we live in a world of scarce resources. Evolution was right. As computers become omnipresent in our lives — I’ve got one dissipating heat in my pocket right now — we’re going to need to figure out how to make them more efficient. Fortunately, we’ve got an ideal prototype locked inside our skull.

The second thing Watson illustrates is the power of metaknowledge, or the ability to reflect on what we know. As Vaughan Bell pointed out a few months ago, this is Watson’s real innovation:

Answering this question needs pre-existing knowledge and, computationally, two main approaches. One is constraint satisfaction, which finds which answer is the ‘best fit’ to a problem which doesn’t have a mathematically exact solution; and the other is a local search algorithm, which indicates when further searching is unlikely to yield a better result – in other words, when to quit computing and give an answer – because you can always crunch more data.
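To make Bell’s two ideas concrete, here is a minimal sketch — emphatically not Watson’s actual code — of what they look like in miniature: a “best fit” score measuring how well a candidate answer satisfies a clue’s constraints, plus a stopping rule that decides when further search is wasted effort and it’s time to buzz in. The knowledge base, threshold, and scoring function are all hypothetical illustrations.

```python
# Toy sketch of the two ideas in the quote (hypothetical, not Watson's code):
# 1) constraint satisfaction as "best fit" scoring, 2) a rule for when
# to quit computing and give an answer.

def best_fit_score(clue_keywords, candidate_facts):
    """Fraction of the clue's keywords that the candidate's known
    facts satisfy (1.0 = perfect fit, 0.0 = no fit)."""
    if not clue_keywords:
        return 0.0
    hits = sum(1 for k in clue_keywords if k in candidate_facts)
    return hits / len(clue_keywords)

def answer(clue_keywords, knowledge, good_enough=0.9):
    """Scan candidates, keep the best fit so far, and quit early --
    'buzz in' -- once a candidate fits well enough that more
    searching is unlikely to yield a better result."""
    best, best_score = None, 0.0
    for candidate, facts in knowledge.items():
        s = best_fit_score(clue_keywords, facts)
        if s > best_score:
            best, best_score = candidate, s
        if best_score >= good_enough:
            break  # when to quit computing and give an answer
    return best, best_score

# Hypothetical mini knowledge base for illustration only
knowledge = {
    "Deep Blue": {"ibm", "chess", "kasparov", "1997"},
    "Watson": {"ibm", "jeopardy", "question answering", "2011"},
}
print(answer({"ibm", "jeopardy", "2011"}, knowledge))  # -> ('Watson', 1.0)
```

Real question-answering systems are vastly more sophisticated, but the shape of the trade-off is the same: scoring imperfect fits, and knowing when an answer is good enough to stop.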

Our brain comes preprogrammed with metaknowledge: We don’t just know things — we know we know them, which leads to feelings of knowing. I’ve written about this before, but one of my favorite examples of such feelings is when a word is on the tip of the tongue. Perhaps it occurs when you run into an old acquaintance whose name you can’t remember, although you know that it begins with the letter J. Or perhaps you struggle to recall the title of a recent movie, even though you can describe the plot in perfect detail.

What’s interesting about this mental hiccup is that, even though the mind can’t remember the information, it’s convinced that it knows it. We have a vague feeling that, if we continue to search for the missing word, we’ll be able to find it. (This is a universal experience: The vast majority of languages, from Afrikaans to Hindi to Arabic, even rely on tongue metaphors to describe the tip-of-the-tongue moment.) But here’s the mystery: If we’ve forgotten a person’s name, then why are we so convinced that we remember it? What does it mean to know something without being able to access it?

This is where feelings of knowing prove essential. The feeling is a signal that we can find the answer, if only we keep on thinking about the question. And these feelings aren’t just relevant when we can’t remember someone’s name. Think, for instance, about the last time you raised your hand to speak in a group setting: Did you know exactly what you were going to say when you decided to open your mouth? Probably not. Instead, you had a funny hunch that you had something worthwhile to say, and so you began talking without knowing how the sentence would end. Likewise, those players on Jeopardy are able to ring the buzzer before they can actually articulate the answer. All they have is a feeling, and that feeling is enough.

These feelings of knowing illustrate the power of our emotions. The first thing to note is that these feelings are often extremely accurate. The Columbia University psychologist Janet Metcalfe, for instance, has demonstrated that when it comes to trivia questions, our feelings of knowing predict our actual knowledge. Think, for a moment, about how impressive this is: the metacognitive brain is able to almost instantly make an assessment about all the facts, errata and detritus stuffed into the cortex. The end result is an epistemic intuition, which tells us whether or not we should press the buzzer. Watson won, at least in part, because it was a fraction of a second faster with its hunches. It didn’t know more. It just knew what it knew first.

I certainly don’t mean to take away from the achievements of those IBM engineers. Watson is an amazing machine. Nevertheless, I think the real lesson of the victorious Watson is that we have much to learn from the software and hardware running in our head. If we’re going to live in a world saturated with machines, then those machines had better learn from biology. As natural selection learned long ago, computational power without efficiency is an unsustainable strategy.

P.S. I really enjoyed Stephen Baker’s Final Jeopardy, if you’d like to learn more about the struggle to create Watson.

By Jonah Lehrer
From wired.com

