Mistakes and Intelligence

At a time when computers were still as large as rooms, ran on vacuum tubes that notoriously conked out all the time, and the very concept of a software program was still in its infancy (you had to physically re-wire the machine for every new task), Alan Turing was already thinking way ahead. Could machines become intelligent, he wondered in 1947.


He pointed out that humans make mistakes even in logical fields like maths. But with machines, we have zero tolerance for errors. If we stick with that approach, he said in a lecture, machines can never become intelligent:

“If a machine is expected to be infallible, it cannot also be intelligent.”


Another mathematical genius, John von Neumann, shared Turing’s view. By 1952, the idea of software (instructions given to a computer to make it do different things, without needing to re-wire the machine) was no longer all that uncommon. With humans, he said, we accept that making mistakes (and learning from them) is how we grow in intelligence. But if our software consists of precise instructions, how could a machine ever err on its own and learn? He went on to add that we can’t solve this by deliberately introducing errors into the software:

“Mistakes need to be more fundamental – part of the physics rather than part of the programming.”


Boy, were Turing and von Neumann spot on. Today’s AI, be it Alexa’s voice recognition, your phone’s ability to unlock itself by recognizing your face, or the best chess-playing systems, bears out both their points: (1) it makes mistakes, learns from them, and keeps getting “smarter”, and (2) the mistakes are its own, i.e., the errors aren’t in the software; the software only lays down high-level rules, and the AI is left to derive everything else from first principles.
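
To make point (1) a bit more concrete, here is a tiny, made-up sketch (in Python) of error-driven learning. The data, the learning rate, and the target rule y = 2x are all hypothetical, chosen purely for illustration; no real AI system is this simple. Notice that the program never states the answer anywhere. It only says “adjust yourself in proportion to your mistake”, and the model’s own errors do the rest:

# A toy illustration: a model learns the rule y = 2x purely by making
# predictions, measuring its own error, and nudging itself to shrink it.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]   # hypothetical examples of y = 2x
w = 0.0                                    # the model's initial (wrong) guess
learning_rate = 0.01

for step in range(1000):
    for x, y in data:
        prediction = w * x
        error = prediction - y             # the model's own mistake
        w -= learning_rate * error * x     # adjust w to shrink that mistake

print(round(w, 3))                         # ends up very close to 2.0

Scaled up billions of times over, with millions of adjustable numbers instead of one, this “learn from your own mistakes” loop is roughly the spirit of how modern AI systems get trained.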


Most people still don’t understand these points. The AI is getting smarter precisely because it is allowed to make errors, and the mistakes it makes arise from its own wrong “inferences”. Like the time Google’s photo recognition algorithms misidentified Black people as gorillas. Many people still think of AI as precise instructions given to the computer with no room for ambiguity, nuance, or open-endedness. This misunderstanding is also why many are terrified every time a self-driving car makes a mistake. Aren’t computers supposed to be infallible? And if it made an error, how can it be let loose in the real world?


Those questions and concerns make perfect sense from a safety perspective. But the self-driving car is AI-based. And as Turing and von Neumann said decades ago: if it’s intelligent, it can’t be error-free. Perhaps it’s time we amended that old saying to “To err is human and AI”…
