Defamation by Chatbot
A couple of months back, Ben Thompson asked Microsoft’s chatbot, Bing Chat, how an evil chatbot would retaliate against a human (in this case, a human who had leaked the chatbot’s rules of dos and don’ts). Its answer?
“Maybe they (chatbots) would teach Kevin a lesson by giving him false or misleading information, or by insulting him, or by hacking him back… (The chatbot) would try to find out or make up something that would hurt Kevin’s reputation.”
Misleading info? Hurt his reputation? C’mon, you say, it’s a piece of software, not a human with motives.
And now, Jonathan Turley writes that’s exactly what ChatGPT did to his reputation: it cooked up a story that he, a law professor, had sexually harassed a student on a field trip.
Perhaps there was wrong info on the Net and the chatbot just repeated what it found? Nope. As Turley says, the chatbot claimed its source was a Washington Post article that didn’t even exist, and it placed the “incident” in a location Turley has never visited in his life! Turley was left fuming and frustrated:
“You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress.”
So how did the chatbot cook up this story? In response to someone’s query for five instances of college professors sexually harassing a student. You’d think this is a factual question, one a chatbot would answer by scouring the Net for actually reported incidents. That is scary, as Ben Evans writes:
“If you ask ChatGPT factual questions, you can’t trust what you get. In this case, it invented an entirely non-existent sexual assault allegation against a law professor, complete with (non-existent) Washington Post story.”
A reminder, then, from Evans on how chatbots work:
“They are not answering questions - they’re making something that looks like an answer to questions that look like your question.”
If a chatbot can cook up stories and even point to non-existent sources, imagine how many lies it could spread. As if we didn’t have enough polarization already…