AI Risk #1 - Real or Imagined?

I started writing this series of blogs based on the surge in what AIs can do, the most famous example of which is ChatGPT. I based the blogs on Nick Bostrom's Superintelligence, a book I read back in 2013. Several things that felt far-fetched then now seem like possibilities to be taken seriously. (Btw, Geoffrey Hinton, one of the AI "godfathers", quit Google recently, saying some of the chatbots are "quite scary".)


Let me start with what Bostrom meant by the term "superintelligence". He begins with a warning about the word "intelligence" itself, pointing out that our (human) tendency to think of intelligence in human-centric terms is dangerous when it comes to AI. He quotes another AI specialist on the topic:

“The human tendency to think of “village idiot” and “Einstein” as the extreme ends of the intelligence scale, instead of indistinguishable points on the scale of minds-in-general.”

We say dolphins or chimpanzees are smart, but we don't distinguish between individual specimens of those species. Similarly, he warns, on any absolute scale of intelligence the difference between the village idiot and Einstein would be negligible.


Superintelligence, Bostrom says, could be based on speed alone: the AI can do what a human can do, just orders of magnitude faster. Or it could be collective, i.e., different AI systems would each be good in different areas, but collectively they would outstrip anything seen so far. (This is like the collective intelligence of scientists, doctors and other professionals: each specialized in only a few areas, yet together they have brought mankind to where it is today.) A third axis could be qualitative, i.e., a totally different way of understanding things; I can think of quantum mechanics as an example of such a leap.


Bostrom warns that an AI that becomes even "weakly" superintelligent could use existing resources to augment its capability. It could use excess hardware for its own purposes (I can imagine hijacking of cloud computing resources qualifying under this heading), and it could draw on, he said, all the knowledge available in electronic format, in other words, the Internet.


All of the above, he says, can create a positive feedback loop: the AI improves itself, and each improvement makes it better at improving itself further, setting off a chain reaction toward superintelligence. And all this could happen at unimaginable speed.
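The "chain reaction" here is really just compound growth. A toy sketch makes the point: if each round of self-improvement adds capability in proportion to current capability, the trajectory is exponential. All the numbers below are made up purely for illustration; this is not a model of any real AI system, just the arithmetic of the feedback loop.

```python
# Toy illustration of a recursive self-improvement loop (hypothetical
# numbers throughout; "capability" and "gain" are invented for this sketch).

def self_improvement(capability=1.0, gain=0.5, generations=10):
    """Each generation, the AI improves itself in proportion to its
    current capability, so the growth compounds like a chain reaction."""
    history = [capability]
    for _ in range(generations):
        capability += gain * capability   # a better AI makes bigger improvements
        history.append(capability)
    return history

trajectory = self_improvement()
# Compounding means capability * (1 + gain) ** generations:
print(trajectory[-1])  # ~57.7 after 10 generations, from a start of 1.0
```

The design point is the feedback term `gain * capability`: because the improvement itself scales with what has already been gained, the curve bends upward rather than growing linearly, which is why Bostrom thinks the final stretch could be fast.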
