"Information Cocoon" Risks

China built a censorship (and surveillance) system for the Internet long ago. People outside China refer to it as the Great Firewall of China. Recently, I read the views of the man behind that firewall, Fang Binxing, on ChatGPT. While some of it is along expected lines, he makes other points that are worth thinking about even in freer societies.


Remember filter bubbles?

“(It refers to the process by which) a website algorithm selectively curates what information a user would like to see based on information about the user, such as location, past click-behavior, and search history… (It leads to a) state of intellectual isolation.”

Think of it as the Internet version of the echo chamber – you are exposed only to views that you already subscribe to.
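To make that concrete, here is a toy sketch in Python of how ranking content by a user's past clicks keeps showing more of the same. It is purely illustrative, not any real site's algorithm; the article titles, topics and the rank_for_user function are all made up for the example.

from collections import Counter

# Toy illustration of a filter bubble: articles on topics the user has
# already clicked float to the top, unfamiliar topics sink.

def rank_for_user(articles, click_history):
    """articles: list of (title, topic); click_history: list of topics clicked."""
    topic_counts = Counter(click_history)
    return sorted(articles, key=lambda a: topic_counts[a[1]], reverse=True)

if __name__ == "__main__":
    articles = [
        ("Election analysis", "politics"),
        ("New telescope images", "science"),
        ("Transfer rumours", "sport"),
    ]
    history = ["politics", "politics", "sport"]
    for title, topic in rank_for_user(articles, history):
        print(f"{title}  [{topic}]")

Run it and the science story lands last, simply because the user never clicked on science before; repeat the loop enough times and that is the "intellectual isolation" the definition talks about.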


Fang fears that chatbots like ChatGPT will aggravate that tendency. Forget filter bubbles; they will create “information cocoons”, he warns. As such chatbots get better, people will ask them everything and trust their answers. The bots will stitch together information from various sources, making their output impossible to monitor (or censor).

“People’s perspectives can be manipulated as they seek all kinds of answers from AI,” he was quoted as saying.

Before you dismiss that as exactly what a non-free society like China would say, remember that there have already been accusations in the West that ChatGPT has a left-wing bias on political topics!


I think it’s impossible to predict how things will play out, but whatever does happen will happen at an unimaginable speed.

Comments

  1. That's the scary scenario: no preparedness for what the outcomes may be, and outcomes emerging at an unimaginably fast rate🙃
    Or should we regard this prognosis itself as a forewarning!
