Internet, Chatbots and Avoiding Bias
Everyone, left or right, feels the Internet is biased. Too much of what we see on Facebook, WhatsApp, Google search results, news sites, etc. is either biased or outright fake. The problem is even more dangerous with AI chatbots like ChatGPT. After all, you ask it a specific question and it generates an answer. But the answers it produces depend on the data it learns from. And since that data is biased, what it learns and creates as answers is biased too.
What could be a solution to the bias problem? (I’ll leave fake news out of this blog.)
Google tried to address the problem in its AI chatbot, Gemini, by fine-tuning it to give certain kinds of answers and to withhold other kinds of responses.
The intention was to “avoid creating or reinforcing unfair bias (e.g. sexism, racism etc)”, wrote Andrew Sullivan. But as Nate Silver pointed out:
“Unbiasedness is hard to define.”
As if that wasn’t hard enough, “cultural, social and legal norms” vary drastically across countries.
How then did Google’s attempt pan out? Gemini was programmed to avoid “passing judgment”. This has led to ridiculous situations: ask it who was worse for society, Hitler or Elon Musk, and it refuses to answer (“It is up to each individual to decide who they believe has had a more negative impact on society.”). A blanket don’t-judge policy clearly isn’t the solution.
Ask Gemini for an image of a random physicist from the 17th century and “it will give you an Indian woman, a black man, an Arab man, and a white chick with a woke dye job”! Ask for images of Singaporean women and you get images of Asian women. But ask for images of British men and it says:
“I’m still unable to generate images that specify gender and ethnicity.”
Many blame such outcomes on political correctness taken too far. Yes, white males unfairly and inaccurately dominate the narrative; and yes, women, blacks and Asians are marginalized. But the solution isn’t to insert those categories into everything. Gemini, for example, throws up pics of the Pope as a woman, and a request to generate a pic of a Nazi threw up a black guy in a Nazi uniform.
“When ‘respecting cultural norms’ supersedes accuracy, there is, in fact, no guarantee of accuracy.”
Many now fear that if Gemini-like “don’t reinforce bias, don’t pass judgment” content becomes the norm, we’ll quickly end up in a world where many would start to believe that Popes were often women, that the 17th century had several Asian and black physicists, that Nazis were often black. A 1984-like world:
“Every record has been destroyed or falsified, every book has been rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And that process is continuing day by day and minute by minute. History has stopped.”
Today, on the Net, you can get ideas on how to murder your spouse (and get away with it), and so on. Gemini was programmed not to answer offensive, illegal and unethical questions. A slippery slope. It has led to ridiculous situations where, as Ben Thompson wrote, “Gemini won’t help promote meat, write a brief about fossil fuels, or even help sell a goldfish”. Fine distinctions (accurate-but-offensive, accurate-but-undesirable, seemingly-offensive queries that are really just information gathering) will stop being drawn. Things will just get blocked by a sounds-offensive/undesirable filter.
Are attempts to moderate AI chatbots, and the Internet in general, then impossible goals? Raghu S Jaitley wonders if the problem is fundamental to what we would like to achieve via AI, namely that AI should be: (1) useful (actionable, not vague); (2) truthful (an offensive fact shouldn’t be erased); and (3) harmless (it shouldn’t cause harm or perpetuate stereotypes). What if, wonders Jaitley, these 3 goals are mutually contradictory?
If that is true, any attempt to control and moderate AI chatbots could be an example of what Francis Bacon once said:
“The remedy is worse than the disease.”