AI – Our New Overlord?


I remember pundits exclaiming that the personal computer, the internet and the smartphone would change the world. And they did…

But this is the first time I can remember new tech being discussed as an existential threat to our species.

Recent developments in Large Language Models such as ChatGPT are significant enough that some of the world’s most qualified researchers are making a plea for us to stop before things get out of hand.

A Rare Event

An Open Letter published by The Future of Life Institute requested we put the brakes on all AI research. This is a rare event. From recent memory, we’ve only ever had moratoriums on human cloning and germ-line modification.

This time, they are less worried about the modification of our species and more about the creation of another ‘species’ – one which is superior to us in every measurable way.

As a long-time technologist, I think it would be prudent to take heed of the letter. But first, we all need to understand the different types of AI, what they mean and what they can do, as most people lack this context.

Today’s post is an important explainer.

Types of AI

At its simplest, Artificial Intelligence is a field which combines computer science and robust datasets to enable human-like problem-solving. AI is achieved by creating algorithms that are capable of learning from and adapting to data and experience.

It’s different to industrial machinery, as the software uses AI to ‘think’ like a human and perform tasks on its own. The AI chooses what to do based on a set of instructions, whereas a machine simply performs a pre-determined task upon instruction.

There are three clear types of AI:

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)
  • Artificial Super Intelligence (ASI)

Artificial Narrow Intelligence: ANI represents the large majority of the AIs we use in modern society. These are generally rule-based AIs which decide something based on an ‘If This, Then That’ protocol (IFTTT). A simple example would be Google Maps, which generates the quickest way to get somewhere based on distance and estimated traffic conditions during the journey. A self-driving car also operates under this doctrine. We could even add image recognition and deep fakes to this list. They are described as narrow because they typically operate in a vertical – the only context they have is within a specific framework. These AIs do improve, iterate and learn as their datasets get bigger, but they are limited to a specific realm. While most ANIs outperform humans at their specific task, they lack nuance, are imperfect and can’t really be applied to different tasks.
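The ‘If This, Then That’ logic described above can be sketched in a few lines of code. This is a toy illustration only – the routes, speeds and traffic rule are invented for the example, and real navigation systems are far more sophisticated:

```python
# A toy rule-based route chooser: fixed rules, no learning.
# All numbers and route names here are hypothetical.

def pick_route(routes):
    """Return the route with the lowest estimated travel time."""
    def estimated_minutes(route):
        speed_kmh = 60.0
        if route["traffic"] == "heavy":   # IF traffic is heavy...
            speed_kmh /= 2                # ...THEN assume half the speed.
        return route["distance_km"] / speed_kmh * 60

    return min(routes, key=estimated_minutes)

routes = [
    {"name": "motorway", "distance_km": 30, "traffic": "heavy"},   # 60 min
    {"name": "back roads", "distance_km": 25, "traffic": "light"}, # 25 min
]
print(pick_route(routes)["name"])  # back roads
```

Everything the system can do is spelled out in advance by its rules – which is exactly why this kind of AI stays ‘narrow’.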

Artificial General Intelligence: ChatGPT falls into this description, and is the reason people are becoming concerned. AGIs take intelligence and make it horizontal, meaning they can be applied to a number of – potentially unlimited – contexts, much like biological beings. An AGI is capable of performing any intellectual task a human can. Such systems can reason, learn, plan, understand natural language, recognise patterns and solve problems. This is what Large Language Models can do, and the reason they can do it is that they’ve learned from the entire gamut of human experience via our species’ killer app – language. All they need is the correct input. AGI is intended to be a more flexible and adaptable form of AI: it can understand new situations and environments, and it is capable of generalising knowledge, taking skills from one domain and applying them to another. Already, ChatGPT has shown a number of emergent capabilities it was not designed for – surprises to the developers themselves. The fear is that this will lead to self-awareness.

Artificial Super Intelligence: ASIs are different, and the major risk we face as this ‘unauthorised’ AI experiment continues. ASI refers to a hypothetical form of artificial intelligence that surpasses human intelligence in every way and is capable of designing and improving its own systems, even beyond human comprehension. It has the potential to develop its own objectives and agendas, and to organise the factors of production for its own purposes. It can go beyond instruction and become a new form of living entity – one which can spawn new, improved versions of itself, and even organise hardware to build out any physical manifestations it may require to perform tasks in pursuit of its own desires – whatever they may be.

ASI is considered the ultimate form of artificial intelligence, and could potentially revolutionise – or end – civilisation as we know it.

Didn’t mean to scare you.

Your business should be scared of AI – but only if it doesn’t get me in to deliver my mind-blowing new keynote on AI – it’s a game changer.

The Singularity

The idea of the Singularity, as proposed by Ray Kurzweil, is based on the concept that once ASI is achieved, it would rapidly accelerate technological progress and fundamentally transform human civilisation. According to Kurzweil, the Singularity would mark the point at which technology becomes so advanced that it is no longer possible to predict or comprehend the future. Kurzweil believes this is both inevitable and irreversible – he currently predicts it will happen by 2045.

Externalities

Humans are typically slow to respond to externalities. This was true of climate change and fossil fuels: we didn’t realise carbon emissions posed a threat until fossil fuels were ensconced in the modern economy. This is not one of those times. If Kurzweil is right, global temperature could be the least of our worries.

The biggest challenge is that we live in an era of democratised technology. No one needs government approval to commence their own AI development lab. No one needs Manhattan Project-style budgets. But the implications could be ‘nuclear’. In addition, our geopolitical environment is one in which we can be sure the Chinese Communist Party won’t press pause on AI, even if democratic countries do.

While it could be that we never create an ASI (I’m still waiting for my fully autonomous vehicle), this is a time when prudence is desirable. As the Open Letter mentions, the last thing we want is unelected tech leaders putting our civilisation on the line. What I’d much rather see is courageous leaders using the powers they’ve been granted to make decisions no individual can.

Keep thinking,

Steve.