AI – Risk of Extinction

Listen to Steve read this post (8 min audio) – let me know if you listened instead of reading this!

Imagine for a minute you were offered an investment opportunity that could be life-changing, enough to make you better off than you are now, maybe even seriously rich. But the price of entry was that you had to put every single asset you owned into it, and there was a small but real chance, a 5-10 per cent probability, that it would go to absolute zero.

Would you do it?

I wouldn’t consider it, even for a second. Yet, these are the risks we are being asked to take with our live Generative AI experiment, at least according to some AI experts.
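
To put the thought experiment in numbers: here is a minimal sketch, in Python, of that all-in bet. The 3x payoff is an assumption added purely for illustration; only the 5-10 per cent ruin probabilities come from the scenario above.

```python
# A back-of-the-envelope look at the all-in gamble described above.
# The 3x upside is an illustrative assumption; the 5-10 per cent
# ruin probabilities are the ones from the scenario.

def expected_multiple(p_ruin: float, upside: float) -> float:
    """Expected multiple of your current wealth for an all-in bet that
    pays `upside` with probability 1 - p_ruin and goes to absolute
    zero with probability p_ruin."""
    return (1 - p_ruin) * upside

for p_ruin in (0.05, 0.10):
    ev = expected_multiple(p_ruin, upside=3.0)
    print(f"p(ruin) = {p_ruin:.0%}: expected outcome {ev:.2f}x your wealth, "
          f"yet a {p_ruin:.0%} chance of losing everything, permanently")
```

Notice the expected value comes out comfortably positive, which is exactly the point: when the downside is total and irreversible, expected value stops being a useful guide to the decision.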

P(Doom)

If you think P(Doom) sounds worrying, trust your instincts. P(Doom) is the term AI researchers use to describe the probability that Artificial Superintelligence will emerge and become an existential risk for humanity. (What a wonderful little moniker.)

Here is the crazy thing. Many learned AI researchers now put this probability as high as 20-50 per cent. Kinda scary. Even a 5 per cent chance of wiping out our species is a worry… and almost no one puts the figure lower than that. Sam Altman, CEO of OpenAI, the company behind ChatGPT, has publicly said the risk is real and has a P(Doom) of around 5 per cent.

It’s at this point we must remember that we are not talking about something merely bad happening, like a recession, a war, a hurricane, or even a pandemic. All of these we’ve faced before and, with a lot of pain, collectively overcome. We are talking about the end of humanity; it doesn’t get any heavier than that.

  • Reply to this email and tell me what about AI worries you.

Says Who?

Some of those with a P(Doom) at worryingly high levels are not fear-mongering crackpots but people deeply involved in giving birth to AI. Here are some of the worried AI researchers and their P(Doom) percentages.

  • Michael Tontchev, a former Microsoft software developer and current senior software lead at Meta, has his at 20 per cent.
  • Paul Christiano, a former OpenAI researcher who also holds a Ph.D. in theoretical computer science from UC Berkeley, has his at 50 per cent.
  • Eliezer Yudkowsky, a renowned AI researcher and decision theorist, has his at 50 per cent.
  • Geoffrey Hinton, known as the godfather of AI and until recently a Google employee, has his at 50 per cent.

Cold War 2.0 & AI Bunkers

As a keynote speaker on the future and AI, I’m most often asked whether we will face a ‘Terminator Moment’. And just like at the height of the Cold War, those with their fingers on the button seem to be the only ones with a bunker they can run to if things go wrong.

In 2016, Altman said in an interview that he was prepping for survival in the event of a catastrophe such as a rogue AI, claiming to have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defence Force, and a big patch of land in Big Sur he could fly to.

Altman’s doomsday vision of AI gone wrong is not uncommon in Silicon Valley. Every tech billionaire worth their salt has a post-apocalyptic contingency plan and a remote bunker. Both Peter Thiel and Google co-founder Larry Page have snapped up land in New Zealand and built bunkers. They literally keep private jets filled with fuel, which they don’t use, and pilots paid to wait for this ‘nuclear moment’.


This is some feedback I got from a client on my new Keynote on AI –

“Steve was amazing, awesome and all the superlatives. His insights on AI were absolutely incredible as was the feedback from our customers.”

It’s a revolution – Get me in – Don’t miss out!  


The AI Saviour?

Readers will know that I’m typically positive when it comes to the emancipating power of technology. And the last thing I want to be accused of is fear-mongering. There is a counter-argument to the worries about the AI threat:

We may not be able to survive without it. Really.

It seems to me that the probability of our species succumbing to other existential risks is greater than most experts’ AI P(Doom). The nuclear threat is still very real, and possibly greater than it ever was during the Cold War. While we theoretically control it, we can only count ourselves lucky that a crazed suicide bomber or rogue terrorist group hasn’t secured and deployed a nuclear weapon.

Likewise, despite our progress with renewable energy, I can’t see progress by any large nation-state that gives me the confidence to believe we can reduce global emissions to the level needed before we reach a point of no return. We are still largely addicted to fossil fuels, GDP, economic growth, and consumption.

Maybe the thing we actually need is an omniscient, benevolent AI to save us from ourselves!

An AI that can uncover abundant, yet-to-be-discovered forms of energy, find ways to circumvent nuclear disaster via ‘Artificial Diplomacy’, and help us navigate species-level dangers that are already clear and present.

Keep Thinking,

Steve.