Computers and Predictions

In his wonderful book on predictions, The Signal and the Noise, Nate Silver devotes a chapter to the match between the then world champion Garry Kasparov and the computer program Deep Blue.

Since it is not practical, even for computers, to analyze every conceivable move to endless depths, both humans and computers break the ultimate objective down into shorter-term tactics, like winning a pawn here or a piece there. Kasparov tried to exploit that tendency by “baiting it into mindlessly pursuing plans that did not improve its strategic position”:
“A classic example of the computer’s biases is its willingness to accept sacrifices (trade a better piece for a weaker one).”
In the first game, Deep Blue took the bait with just such a sacrifice. Neither Kasparov nor the computer could calculate all the moves from there onwards, but Kasparov knew from experience that, in such positions, “with such pressure the odds are heavily in his favor”. He was right: he went on to win that game and remarked:
“Typical computer weaknesses. I’m sure it was very pleased with the position, but the consequences were too deep for it to judge the position correctly.”

But even though Kasparov won that game, something had happened that might have changed the course of the rest of the series. In a “desperate, but not quite hopeless” position, Deep Blue had made a move that made no sense. Stranger still, the computer resigned after the very next move! That move messed with Kasparov’s mind:
“What had the computer been thinking?... He was used to seeing Deep Blue commit strategic blunders… but this was something different: a tactical error in a relatively simple position – exactly the sort of mistake that computers don’t make.”
A paranoid Kasparov went over that move again and again with his support staff. They considered many possibilities:
1) Did Deep Blue commit suicide in a lost position rather than drag it out and reveal “any more of how it played”?
2) Was it an “elaborate hustle”? By deliberately losing the first game, did Deep Blue want to make Kasparov over-confident?
Not satisfied with these answers, he pored over the “recommended move” in that position, the one that Deep Blue did not make. It would have led to checkmate in a little over 20 moves. So, reasoned Kasparov, if Deep Blue chose to make another move, “it had found another one that would take him longer”. This was a terrifying prospect: How many moves ahead could the computer analyze? Did the “inexplicable blunder” in fact reveal “great wisdom”?

In Game 2 of the series, in a marginally advantageous position, Deep Blue passed up a move that all the grandmasters recommended and played a different one instead. Why, wondered Kasparov:
“Unless his suspicion was correct – Deep Blue was capable of foreseeing twenty or more moves down the road.”
A few moves later, Kasparov had no chance of winning, but he might still have been able to draw. Yet he resigned:
“The computer can’t have miscalculated, he thought, not when it could think twenty moves ahead.”
Kasparov’s support staff was aghast: there was a way to draw by perpetual check just seven moves later. When he was told this, Kasparov was stunned:
“I was so impressed by the deep positional play of the computer that I didn’t think there was any escape.”
It was an “embarrassing, unprecedented mistake”. Deep Blue had gotten well and truly inside Kasparov’s head. He went on to lose the series, the first time a computer had beaten the reigning human world champion in a match. Ironically, it turned out that the senseless move in the first game, the one that set off this whole chain of second-guessing, was the result of a software bug!

But what has any of this got to do with predictions, the topic of Silver’s book? Silver uses this example to bring out the danger of relying too much on computers to make predictions in areas like weather forecasting. Sure, the computer won’t make careless mistakes the way a human might, but if the code has a bug, the prediction will still be wrong. Garbage in, garbage out.

Even the great Kasparov missed that possibility; it is something we should remember when we rely on computers to make predictions.

Comments

  1. I had heard you mention the bug in Deep Blue that rattled Kasparov, but I didn't know then the implications that you brought out this time, in this blog.

    They say chess offers enormous scope for "psychological aggression" against the opponent, which has nothing to do with the chess game itself and the contemplated moves. They said that Kasparov's habit of repeatedly getting up and taking walks on the floor was a tactic of that kind and was objected to by his opponent. They said that when Bobby Fischer won the fourth game decisively in that title series [Fischer having lost the first, drawn the second and walked out on (possibly never even come to) the third, hence deemed lost], the effect on Spassky was unexpected. The series never got around to playing all the games - Fischer had won hands down long before anywhere close to the full number of games could be played. They say Spassky was in no condition to resume tournament chess any more in his life; he was a shattered person, they said. I don't know if this is all true.

    Now that you mention this episode of psychological warfare in man versus machine, even if it was all whipped up in Kasparov's own mind, it looks like human beings will have this disadvantage against machines.

    Would this suggest anything about what robots could do to us in the future world? Scientists claim they are trying to incorporate emotions into robots. Would that also mean future robots would be visiting future psychiatrist-counselor robots to keep their balance of mind? There is no telling about anything anymore! :-)


