Machine Intelligence and Mind Games


Former chess champion Garry Kasparov wrote a very interesting book titled Deep Thinking. Once upon a time, chess was the holy grail for machine intelligence. Claude Shannon, the founder of information theory, explained why: chess is a sharply defined game, its objective is clear, and the last reason is best explained in Kasparov’s words:
“Since chess requires thinking, either a chess playing machine thinks or thinking doesn’t mean what we believe it to mean.”

There was also the hope that training a computer’s guns on chess might lead to other learnings, things far deeper than chess. Unfortunately, though:
“Chess just wasn’t deep enough to force the chess-machine community to find a solution beyond speed… Patterns, knowledge, and other humanlike methods were discarded as the super-fast brute force machines took home all the trophies.”

Which is why AlphaGo, the Google AI that beat the human world champion at the Chinese game Go, was “more interesting as an AI project”: Go is “too big of a matrix to crack by brute force, too subtle to be decided by the tactical blunders that define human losses to computers at chess”. Indeed, as I wrote earlier.

In an age where Google can find you images, we take machine intelligence for granted. But as Kasparov writes, that path was long, hard and at times amusing. Like the time they fed this machine hundreds of thousands of positions from (chess) grandmasters’ games to teach it. Here’s what happened when it played a real game:
“(It) launched an attack, and immediately sacrificed its queen! It lost in just a few moves, having given up the queen for next to nothing. Why did it do this? Well, when a grandmaster sacrifices his queen, it’s nearly always a brilliant and decisive blow. To the machine, educated on a diet of GM games, giving up its queen was clearly the key to success!”
Correlation ain’t causation.

The other area where the book is very interesting is Kasparov’s description of the mind games with the computers, Deep Thought and later Deep Blue:
“As a believer in chess as a form of psychological, not just intellectual, warfare, playing against something with no psyche was troubling.”
He switched from his usual style of play to create novel positions, thereby depriving the computer of the option to dig into its database of older games. He tried to strangle it by steering into situations that required analysis deeper than what the computer could calculate.

Ironically, the IBM team did similar things. If Kasparov played an anticipated move in certain scenarios, they programmed the computer to reply instantly. Why?
“This has a psychological impact, as the machine becomes unpredictable, which was our main goal.”

And Kasparov made an assumption: while a computer may make mistakes or miss things in the distant future of a game, it would never miss anything in the near future of a game. Therefore, he reasoned:
“If it is allowing you to play a winning tactic, it’s probably not winning at all.”
That sounds very reasonable, right? Except that this mode of thinking is why Kasparov resigned a game that could have been drawn in a handful of moves. Why? Kasparov couldn’t imagine that a computer would have missed a way to draw just a few moves down the game!

In another game, Kasparov was convinced that IBM was cheating, that humans were intervening behind the scenes. He even said as much in an interview, saying the computer’s move felt like Maradona’s “hand of God” goal. Worse, IBM turned down his demand that the computer’s logs, a record of what it was “thinking” at each step, be made available to the judge. So was this proof that IBM was cheating? Or, as Kasparov himself wonders, was the refusal just a part of IBM’s psychological war, a way to let Kasparov get all wound up with conspiracy theories and thereby not play his best?
“If they can get you asking the wrong questions, they don’t have to worry about the answers.”

I never knew the man vs. machine chess games had involved psychological warfare… from both sides!

Comments

  1. Yes, yes. I hope the following idea is relevant to this discussion: such things compel us to "keep on pondering what it can all be, or where it will lead". At this juncture, when humans are trying to venture deep into cybernetics (I take this to mean machine intelligence), there will be fears and apprehensions. Some people will plunge into paranoia too!

    I think what has been achieved in machine intelligence, robotics and other such things seems slanted more towards the 'pros' than the 'cons'. I can give examples such as successful robotic surgery, which reduces complications. I came across this when my wife's brother underwent it and is now recovering.

    One other thing I came across, which I would also like to mention. While at Coimbatore, I was happy to find my friend making calls while we were moving in the car. All he did was give verbal commands, after one simple activation by hand. A robotic voice answered his commands with appropriate responses and prompts, the person he named would soon get connected, and he would speak as if to someone in the car. Fortunately, it looked far less hazardous than people doing mindless mobile-talk while driving. I reason thus: what my friend did was no different from speaking to fellow passengers while driving, a little of which is not harmful or dangerous.

    Soon we are going to find that most IT things will use voice interactions, not hand-pressing something! Most applications will also prove extremely beneficial by offering greater efficiency and optimization.

    What I am driving at is this: as with many things in the past, fearing advancements may not lead us anywhere. And newer things will probably only amount to a change of ways - not calamitous outcomes.
