Don't Shoot the Algorithm!
As algorithms make more and more decisions, any accusation of bias or an outright error in the decision gets redirected at the algorithm. Seth Godin cites a few such examples:
- That important mail that landed in the spam folder;
- Who gets stopped at airports for extensive checks;
- Google’s search results;
- Facebook’s news feed.
As Godin says, this begins to sound more and more like “hiding behind the algorithm”. But why blame the algorithm “as if it wrote itself”? Isn’t it obvious, asks Godin, that “someone wrote that code”?
What happens next when AI (an algorithm) begins to write new algorithms?!

“As AI gets ever better at machine learning, we'll hear over and over that the output isn't anyone's responsibility, because, hey, the algorithm wrote the code.”
While there are obvious dangers in letting algorithms make decisions, isn’t it also true that algorithms would make decisions that humans won’t, for irrational and/or moral reasons like political correctness? Do you really want the next plane you travel in to be blown up because of political correctness? Or would you rather that people who match the profile of terrorists be vetted thoroughly?
On many topics, I’d pick an algorithm over the decisions of the likes of Angela Merkel. That, of course, leads to the next questions, which are very hard to answer: which are those topics? And even more important, who decides the list of those topics?

Or can we write an algorithm for that too?!