AI Risk #2 - Overtaking Humans

With the recent spurt in AI capabilities, I re-read Nick Bostrom’s Superintelligence. In the introduction, he wrote:

“The control problem – the problem of how to control what the superintelligence (aka AI) would do – is quite difficult.”

Even more ominously:

“It also looks like we will only get one chance. Once unfriendly superintelligence exists, it will prevent us from replacing it or changing its preferences.”


Some feel this is an excessively pessimistic view. Won’t AI’s rise be gradual, giving us time to formulate and tweak our response? Not necessarily, argues Bostrom. Why not?

“The (AI) train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by.”

In other words, AI capability might simply explode, growing exponentially. It would hit human level abruptly and keep climbing at that same exponential pace, leaving us no time to react.


Now keep in mind that Bostrom published his book back (it feels a lifetime ago) in 2014. Yet even then, he correctly anticipated many things.


There is a branch of probability called Bayesian theory. Simply put, it tells us how much we should adjust our beliefs when new information arrives: by how much should the new data strengthen a belief, or conversely, by how much should our confidence in it decrease? This is at the heart of how AIs “learn” – they consume more and more data and use it to adjust their “understanding” of things. Bostrom correctly anticipated that improvements to Bayesian algorithms would “yield immediate improvements across many different areas”. I feel this is why AIs got so good at language (ChatGPT) and at art almost simultaneously.
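
To make that concrete, here is a minimal sketch of a single Bayesian update loop – the “adjust your belief as data arrives” idea described above. The coin-flip setup and the 70% bias figure are invented for illustration only; real AI systems apply the same principle at vastly larger scale.

```python
# A toy Bayesian update: how much should new evidence shift a belief?
# Hypothesis H: "this coin is biased towards heads (70% heads)".
# Alternative: the coin is fair (50% heads).

def bayes_update(prior, likelihood_if_h, likelihood_if_not_h):
    """Return the posterior P(H | evidence), given the prior P(H)."""
    evidence = likelihood_if_h * prior + likelihood_if_not_h * (1 - prior)
    return likelihood_if_h * prior / evidence

belief = 0.5  # start undecided about the biased-coin hypothesis
for flip in "HHTHHHTH":  # a stream of observations arriving one by one
    if flip == "H":
        belief = bayes_update(belief, 0.7, 0.5)  # heads favours "biased"
    else:
        belief = bayes_update(belief, 0.3, 0.5)  # tails weakens it
    print(f"after {flip}: P(biased) = {belief:.3f}")
```

Each observation nudges the belief up or down by exactly the amount the evidence warrants – which is the sense in which consuming more data “adjusts the understanding”.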


Worryingly, he wrote:

“If somebody were to succeed in creating an AI that could understand natural language as well as a human adult, they would in all likelihood also either have succeeded in creating an AI that could do everything else that human intelligence can do, or they would be a very short step from such a general capability.”

With ChatGPT, the language barrier has been breached.


When would an AI be deemed to have exceeded human intelligence? Bostrom’s take:

  • It would exceed the cognitive performance of humans in virtually all domains of interest;
  • It would be able to “learn” as it goes along;
  • It would know how to deal with uncertainty and probabilistic knowledge;
  • It would be able to extract useful concepts from the data coming off its sensors and from its own internal states;
  • It would know how to improve its own architecture.

Today, AI checks all of the boxes above except the last: its architecture, both hardware and software, is still decided by us humans. But with AI already starting to write software, how long before it can write software better than we can, and begin to design hardware ideal for its own progress?


Which brings us back to the urgency of Bostrom’s “control problem” from the top of this post – and his ominous point that we might get only one shot at building a control system for AI…
