The Alignment Problem
The alignment problem. That is the phrase Yuval Noah Harari uses in Nexus for the mismatch between short-term goals and long-term aims, and thus for the problems created by today’s information systems. The Internet started off free (free content, that is), but companies had to find a way to make money. They found ads. That created a second-order consequence: it became necessary to show more ads, which meant users had to spend the maximum possible time online. User “engagement” has thus become the mantra of the Internet, quality (let alone truth) of content be damned. And outrage outsells tranquillity by a mile.
Clausewitz, a Prussian general, wrote a book called On War, in which he famously said:
“War is the continuation of policy by other means.”
In his view, war should not be waged out of emotion, ego, or even righteousness. Rather, it should be used as a political tool, and even then only if it serves some overarching political goal. (Indira Gandhi in Bangladesh is pure Clausewitz; George W Bush in Iraq is the opposite of Clausewitz).
“History is full of decisive military victories that led to political disasters.”
In Clausewitz’s view, Napoleon too fell into the second camp: his military victories were spectacular, but there was no overarching political goal behind them, which is why his empire (and Napoleon himself) disintegrated as quickly as it rose. Harari agrees:
“Both Napoleon and George W Bush fell victim to the alignment problem. Their short-term military goals were misaligned with their countries’ long-term geopolitical goals.”
Interesting, but how is any of this connected to information, the topic of Harari’s book? Aha, he says: today’s problems with the Internet and social media are rooted in the same alignment problem. “Maximizing user engagement” is as shortsighted a policy as “maximizing victory” without any underlying political goal.
In his 2014 book on the dangers of AI, Superintelligence (written well before the current AI boom), Nick Bostrom described the now-famous “paperclip problem”. Say an AI is told to maximize the production of paperclips. If it pursued exactly that goal and nothing else, he said, it might well proceed as follows. It would dismantle anything that could serve as raw material for more paperclips. It would build more factories, and if it needed to manipulate policies and politicians to that end, it would do that too. It might even take up space exploration to mine asteroids for raw material. The point he was making was that this is the danger of AI: set it a goal, and it may not bother about implied limits (the world only needs so many paperclips) or taken-for-granted constraints like following the law or not killing people.
“The problem with computers isn’t that they are particularly evil but that they are particularly powerful.”
The paperclip problem sounded outlandish when proposed. Today, it feels prophetic in more and more ways.
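To make that concrete, here is a toy sketch in Python. It is purely my illustration, not anything from Bostrom or Harari, and every name and number in it is made up: a greedy “paperclip” maximizer that knows nothing about “enough” keeps consuming resources indefinitely, while the same objective with the implied limit written in explicitly stops once demand is met.

```python
# Toy illustration only (not Bostrom's or Harari's formulation): a "paperclip"
# objective pursued greedily, with and without the implied human limits.

def paperclips(plan: dict) -> float:
    """Toy objective: output grows with factories built and raw material consumed."""
    return plan["factories"] * plan["raw_material"]

def naive_maximizer(world_resources: float, steps: int) -> dict:
    """Pursues the goal and nothing but the goal: each step it builds more
    capacity and grabs more resources. Nothing tells it that the world only
    needs so many paperclips, or that some resources are off limits."""
    plan = {"factories": 1, "raw_material": 0.0}
    for _ in range(steps):
        plan["factories"] += 1
        plan["raw_material"] += world_resources  # consume everything in reach, again
    return plan

def constrained_maximizer(world_resources: float, demand_cap: float) -> dict:
    """Same objective, but the 'implied limit' is made explicit: stop at demand."""
    plan = {"factories": 1, "raw_material": 0.0}
    while paperclips(plan) < demand_cap:
        plan["factories"] += 1
        plan["raw_material"] += world_resources
    return plan

print(paperclips(naive_maximizer(1000.0, steps=10)))       # 110000.0, and still growing
print(paperclips(constrained_maximizer(1000.0, 5000.0)))   # stops soon after "enough"
```

The only difference between the two functions is that the second has the human constraint encoded in the objective itself; the first is never told, so it never stops.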
The problem, says Harari, becomes worse because AI can come up with solutions humans would never conceive of; it will not necessarily follow the law; it will not have any ethical limits. It will only achieve and maximize the goal. Tell it to maximize user engagement and, while you can’t imagine the specifics in advance, you will end up with something similar to the paperclip scenario.
AIs are trained on data sets. Any errors or biases in those data sets, deliberate or accidental, will be “learnt” by the AI as true. If historical data says men are better at most jobs (because, hey, women’s equality began only recently), AIs can and do conclude that men are better, and can then perpetuate that by hiring more men! Getting rid of such algorithmic biases is as hard as removing human biases.
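Here is a small sketch of how that happens, using synthetic data and a standard classifier. It is my own illustration, not an example from the book, and the data, feature names, and numbers are all invented: the only flaw in the made-up “historical hiring” data is that men were hired more often for reasons unrelated to skill, yet the model dutifully learns gender as a predictive feature and would favour male candidates on new applications.

```python
# Illustrative sketch (not from the book): a hiring screener trained on
# synthetic, historically biased data learns and perpetuates the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
is_male = rng.integers(0, 2, n)     # 1 = male, 0 = female (synthetic)
skill = rng.normal(0.0, 1.0, n)     # true ability, same distribution for everyone

# Past hiring decisions: skill mattered, but men also got an unearned boost.
hired = (skill + 1.5 * is_male + rng.normal(0.0, 1.0, n)) > 1.0

X = np.column_stack([is_male, skill])
model = LogisticRegression().fit(X, hired)

print("weight on gender:", model.coef_[0][0])   # clearly positive: the bias was learnt
print("weight on skill: ", model.coef_[0][1])

# Two equally skilled candidates who differ only in gender get different odds.
candidates = np.array([[1, 0.5], [0, 0.5]])
print("P(hire) male vs female:", model.predict_proba(candidates)[:, 1])
```

Nothing in the code “wants” to discriminate; the model is simply faithful to the data it was given, which is exactly the point.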
Already we see signs that most people trust the AIs. AI is becoming like a religion: something trusted to get things right. Yet it has been known to hallucinate and cook up “facts”. Sometimes we catch such occurrences; how many times are we missing them?