Inductive Learning
For a very long time, Artificial Intelligence (AI) seemed no match for the supposedly “easy” tasks: the ones that even small children can do, like lifting blocks and stacking them to form a tower, or identifying objects in a picture. The problem wasn’t just about processing power or good-enough sensors, because those were getting better and cheaper all the time. So what, then, was the problem?
But before I get to that question, remember “machine learning”? Oversimplified, it is the most popular way machines learn:
1) Load just a very few, very high-level rules into the computer;
2) Throw a whole lot of data at the computer to interpret on its own;
3) Let the computer create its own rules, including changing existing ones;
4) Repeat steps 2 and 3.
Step #3 is why it is called “machine learning”. It is also the approach to AI that has yielded the most results, from voice assistants on smartphones to tagging photos by their content. A toy sketch of the loop appears below.
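To make the four steps concrete, here is a minimal sketch in Python. It is not any real library’s API; the function and variable names are mine, purely for illustration. It is a tiny perceptron whose only built-in “rule” is a weighted sum, and which rewrites its own weights as the data comes in:

```python
def train(examples, epochs=10, lr=0.1):
    weights = [0.0, 0.0]  # step 1: start with almost no built-in knowledge
    bias = 0.0
    for _ in range(epochs):               # step 4: repeat
        for features, label in examples:  # step 2: feed it data
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction
            # step 3: the machine changes its own rules (the weights)
            # in proportion to how wrong it just was
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Example: it figures out "output 1 only when both inputs are 1" (AND)
# without ever being told that rule explicitly.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train(data))
```

Nobody typed the AND rule in; the program induced it from four examples, which is all step #3 really amounts to.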
Was the problem that we don’t (consciously) know how we do a lot of mundane things, so we couldn’t program the steps into a machine? À la “The Puzzled Centipede”:
A centipede was happy quite,
Until a frog in fun,
Said, “Pray tell which leg comes after which?”
This raised her mind to such a pitch,
She lay distracted in a ditch,
Not knowing how to run.
Is that why machine learning worked? Because we didn’t try to teach the computer how to walk, and just let it figure things out on its own by throwing data at it? Isn’t that the same way a child learns the grammar of a language: not by being given specific rules (which we don’t know anyway), but by trial, error, and observation?
Kathryn Schulz wrote in her book, Being Wrong:
“This strategy of guessing based on past experience is known as inductive reasoning.”
But it comes with a risk:
“It means that our beliefs are not necessarily true. Instead, they are probabilistically true… You make best guesses based on your cumulative exposure to the evidence every day.”
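To see what “probabilistically true” looks like in practice, here is a hand-rolled sketch (the numbers and names are mine, purely illustrative) of a belief updated by Bayes’ rule as evidence accumulates:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    # P(hypothesis | evidence) via Bayes' rule
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Start fairly unsure that "this photo contains a cat".
belief = 0.5
# Each piece of evidence (say, a detector that fires 90% of the time
# on cats and 20% of the time otherwise) nudges the belief upward.
for _ in range(3):
    belief = update(belief, 0.9, 0.2)
print(round(belief, 3))  # ~0.989: probably true, never certainly true
```

No amount of evidence pushes the belief to exactly 1; it only becomes more probable, which is exactly Schulz’s point.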
I guess this explains why Google’s photo-recognition software got so good: it followed a very human-like inductive reasoning. Unfortunately, it is also what led it to misidentify photos of Black people as photos of gorillas. Does that mean the biases of humans stem from a flaw in the way we learn?