ChatGPT, Hallucinations and Plugins

I was amused to find yet another thing ChatGPT can do. One guy asked the tool to identify the author of an article based on just its first four paragraphs. ChatGPT correctly identified the author! (The author in question, Ben Thompson, publishes prolifically on the Net, so there was plenty of his writing to compare against.)

ChatGPT then explained how it had arrived at the answer. Read the entire explanation here (it’s quite short): it sounds exactly the way a human would reason. Scary? Impressive? Both?

ChatGPT, like all AIs, builds its own model of the world, writes Ben Thompson. And like humans, its model can be wrong. But it can’t realize that. There’s even a technical term for it – hallucination:

“In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called delusion) is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot with no knowledge of Tesla’s revenue might internally pick a random number (such as “$13.6 billion”) that the chatbot deems plausible, and then go on to falsely and repeatedly insist that Tesla’s revenue is $13.6 billion, with no sign of internal awareness that the figure was a product of its own imagination.”

Hallucinations are obviously a bad thing – if you are looking for correctness. Look at it differently, though, and a hallucination is a creation. Is creativity, then, just a milder form of hallucination, wonders Thompson… It reminded me of that wonderful Calvin and Hobbes strip that made the same point.

Nowadays there are even plugins (add-ons) for ChatGPT. Some are maths-focussed (equation solving, graphs, etc.); others have interesting commercial uses. For example, say you ask ChatGPT for a recipe. With such a plugin installed, at the end of the answer (recipe + ingredients), it will ask if you want to buy all the ingredients. If you say yes, the plugin adds the items to your shopping cart on an affiliated site.

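For the curious, here is a minimal sketch of what the backend of such a shopping plugin might look like. A ChatGPT plugin is essentially an ordinary web API, described to the model via an OpenAPI spec; ChatGPT decides when to call it and turns the JSON response into a natural-language reply. Everything below – the add_to_cart endpoint, the payload fields, the checkout URL – is a hypothetical illustration, not any real store’s API.

```python
# Hypothetical backend for a recipe-shopping plugin (illustrative only).
# A ChatGPT plugin is a plain HTTP API; the model calls it and then
# summarises the JSON response for the user.
from flask import Flask, request, jsonify

app = Flask(__name__)

# In-memory carts keyed by user id; a real plugin would call the
# affiliated store's own API instead.
CARTS: dict[str, list[str]] = {}

@app.post("/add_to_cart")
def add_to_cart():
    payload = request.get_json(force=True)
    user = payload.get("user_id", "anonymous")
    items = payload.get("ingredients", [])
    CARTS.setdefault(user, []).extend(items)
    # ChatGPT would turn this JSON into a reply like
    # "I've added 6 items to your cart – checkout here: ..."
    return jsonify({
        "added": items,
        "cart_size": len(CARTS[user]),
        "checkout_url": "https://store.example.com/cart",  # illustrative
    })

if __name__ == "__main__":
    app.run(port=5003)
```

The interesting design choice is that the model, not the developer, decides when to invoke the endpoint; the plugin just has to describe its API clearly enough for ChatGPT to use it.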

Even if one feels that the danger of an AI going rogue is far-fetched, some wonder whether the intentions of the human using the AI can now lead to highly negative outcomes. As Jeff Hawkins wrote, an AI’s model is like the map of a place. The map itself is neutral; a trader can use it for one purpose and a military general for a very different one.
