Can the Chinese Room Think?
In 1980, the philosopher John Searle proposed a now-famous thought experiment, the Chinese Room. It goes like this:
“Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.”
The narrow conclusion was that a digital computer could only appear to understand language; it could never truly understand it. The broader conclusion was that the thought experiment refuted the idea that the mind is merely a computing machine. Its implications thus reached into semantics, the philosophy of language and mind, consciousness, computer science, and cognitive science.
Does Google’s Pathways Language Model (PaLM) prove that the Chinese Room can think? But first, what is Pathways?
“We’d like to train one (AI) model that can not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively. That way what a model learns by training on one task – say, learning how aerial images can predict the elevation of a landscape – could help it learn another task – say, predicting how flood waters will flow through that terrain.”
Here’s one example of the kind of “understanding” PaLM is capable of.
Question:
Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
Answer:
The answer is 11.
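Just to make the arithmetic explicit, here is a quick illustrative check in Python; the variable names are mine and not part of PaLM's output:

    # Verify the tennis-ball arithmetic: 5 existing balls plus 2 cans of 3 each.
    starting_balls = 5
    cans_bought = 2
    balls_per_can = 3
    total = starting_balls + cans_bought * balls_per_can
    print(total)  # prints 11, matching PaLM's answer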
But wait, PaLM can even “explain” how it arrived at the answer.
Input:
Jennifer looked out her window and sees a really cool cloud below her. She unbuckles her seatbelt and heads to the bathroom. Is Jennifer probably traveling more than 300 miles per hour relative to the earth?
Model Output:
300 miles per hour is about 480 km/h. This is about the speed of a commercial airplane. Clouds are usually below airplanes, so Jennifer is probably on an airplane. The answer is “yes”.
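The unit conversion the model leans on here checks out; below is another small illustrative sketch, again not something PaLM produced:

    # Convert 300 mph to km/h to sanity-check the "about 480 km/h" claim.
    MPH_TO_KMH = 1.609344  # kilometres per statute mile
    speed_kmh = 300 * MPH_TO_KMH
    print(round(speed_kmh))  # prints 483, close to "about 480 km/h"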
Many such examples certainly suggest that the computer is reasoning, which is why Alex Tabarrok writes:
“Searle’s thought experiment was first posed at a time when the output from AI looked stilted, limited, mechanical. It was easy to imagine that there was a difference in kind. Now the output from AI looks fluid, general, human. It’s harder to imagine there is a difference in kind. The sheer ability of AI to reason, counter-balances our initial intuition, bias and hubris.”