AI – Risk of Extinction

Listen to Steve read this post (8 min audio) – let me know if you listened instead of reading this!

Imagine for a minute you were given an investment opportunity which could be life-changing, enough to make you better off than you are now, maybe even seriously rich. But the price of entry was that you had to put every single asset you owned into it and there was a small chance it would go to absolute zero — even a 5-10 per cent probability that this would occur.

Would you do it?

I wouldn’t consider it, even for a second. Yet these are the risks we are being asked to take with our live Generative AI experiment, at least according to some AI experts.
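That instinct checks out mathematically. Here is a minimal sketch (the 90/10 odds and 10x payoff are my own hypothetical numbers, not anyone’s forecast): the naive expected value of the all-in gamble looks wonderful, but the expected log of your wealth – the quantity that governs long-run outcomes, as in the Kelly betting framework – is negative infinity whenever total ruin is possible.

```python
import math

# Hypothetical all-in gamble: 90% chance of 10x wealth, 10% chance of zero.
p_win, p_ruin = 0.90, 0.10
upside, ruin = 10.0, 0.0   # multipliers on current wealth

# Naive expected value looks great: 0.9 * 10 + 0.1 * 0 = 9x on average.
expected_value = p_win * upside + p_ruin * ruin

# But expected log-wealth is negative infinity whenever total ruin
# has any probability at all - log(0) is minus infinity.
def expected_log_wealth(p_win, upside, p_ruin, ruin):
    def safe_log(x):
        return math.log(x) if x > 0 else float("-inf")
    return p_win * safe_log(upside) + p_ruin * safe_log(ruin)

print(expected_value)                                    # 9.0
print(expected_log_wealth(p_win, upside, p_ruin, ruin))  # -inf
```

In Kelly terms, any bet with a positive probability of taking your bankroll to exactly zero gets a stake of less than 100 per cent – which is precisely why ‘every single asset’ is the deal-breaker.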

P(Doom)

If you think P(Doom) sounds worrying, trust your instincts. P(Doom) is the term AI researchers are using to describe the probability that Artificial Super Intelligence will emerge and become an existential risk for humanity. (What a wonderful little moniker.)

Here is the crazy thing. Many learned AI researchers now have this probability sitting as high as 20-50 per cent. Kinda scary. Even a 5 per cent chance of wiping out our species is a worry… and almost no one puts the probability lower than that. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has publicly said the risk is real and has a P(Doom) of around 5 per cent.

It’s at this point that we must remember that we are not talking about something bad happening here, like, say, a recession, or a war, a hurricane, or even a pandemic. All of these we’ve faced before and, with a lot of pain, have overcome, collectively. We are talking about the end of humanity – it doesn’t get any heavier than that.

  • Reply to this email and tell me what about AI worries you.

Says Who?

Some of those with a P(Doom) at worryingly high levels are not fear-mongering crackpots, but those deeply involved in giving birth to AI. Here are some of the worried AI researchers and their P(Doom) percentages.

  • Michael Tontchev, a former Microsoft software developer and current senior software lead at Meta, has his at 20 per cent.
  • Paul Christiano, a former OpenAI researcher who also holds a Ph.D. in theoretical computer science from UC Berkeley, has his at 50 per cent.
  • Eliezer Yudkowsky, a renowned AI researcher and decision theorist, has his at 50 per cent.
  • Geoffrey Hinton, known as the godfather of AI, who recently left Google, has his at 50 per cent.

Cold War 2.0 & AI Bunkers

As a keynote speaker on the future and AI, the most common question I get asked is whether we will face a ‘Terminator Moment’. And just like at the height of the Cold War, those with their fingers on the button seem to be the only ones with a bunker they can run to if things go wrong.

In 2016, Altman said in an interview he was prepping for survival in the event of a catastrophe such as a rogue AI, claiming to have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces, and a big patch of land in Big Sur that he can fly to.

Altman’s doomsday vision of AI gone wrong is not uncommon in Silicon Valley. No tech billionaire worth his salt is without a post-apocalyptic contingency plan and remote bunker. Both Peter Thiel and Google co-founder Larry Page have snapped up land in New Zealand and built bunkers. They literally have private jets filled with fuel – which they don’t use – and pilots paid to wait for this ‘nuclear moment’.


This is some feedback I got from a client on my new Keynote on AI –

“Steve was amazing, awesome and all the superlatives. His insights on AI were absolutely incredible as was the feedback from our customers.”

It’s a revolution – Get me in – Don’t miss out!  


The AI Saviour?

Readers will know that I’m typically positive when it comes to the emancipating power of technology. And the last thing I want to be accused of is fear-mongering. There is a counter-argument to the worries about the AI threat:

We may not be able to survive without it. Really.

It seems to me that the probability of our species succumbing to other existential risks is greater than most experts’ AI P(Doom). The nuclear threat is still very real, and possibly greater than it ever was during the Cold War. While we theoretically control it, we can only count ourselves lucky that a crazed suicide bomber or rogue terrorist group hasn’t secured and deployed nuclear weapons.

Likewise, despite our progress with renewable energy, I can’t see progress by any large nation-state that gives me confidence we can reduce global emissions to the level needed before we reach a point of no return. We are still largely addicted to fossil fuels, GDP, economic growth, and consumption.

Maybe the thing we actually need is an omniscient, benevolent AI to save us from ourselves!

An AI which can uncover new forms of yet-to-be-discovered, highly available energy, or ways to circumvent nuclear disaster via ‘Artificial Diplomacy’ – an AI which can help us navigate species-level dangers that are already clear and present.

Keep Thinking,

Steve.

Banning AI in Schools

Listen to Steve read this post (6 min audio) – let me know if you listened instead of reading this!

With Generative AI, we now all possess virtual PhDs in every subject. Much of the intellectual labor we once performed can now be done for us, and the results are often better than what most humans could produce. This prompts the question of what education’s role should be.

How to Cheat!

ChatGPT is, of course, at the forefront. While many schools and universities have been attempting to identify or ban “ChatBot cheating,” one of the more commendable approaches I’ve observed was taken by the University of Sydney.

First-year medical students enrolled in the course “Contemporary Medical Challenges” now have ChatGPT incorporated into their curriculum.

Students were given a task: to formulate a question on a modern medical challenge of their choosing, prompt ChatGPT to “write” an essay on the topic, and meticulously review and edit the AI’s output. They were required to complete at least four drafts and reviews, edit and re-prompt the AI, and then refine it into a submission-worthy final draft.

The main criterion for success was the ability to manipulate the prompts so that ChatGPT produced an optimal essay, while observing the process and thinking they went through as they edited the essay and re-prompted the AI to delve into the appropriate arenas of knowledge.

“We want to make sure the grads are not just getting ChatGPT to do their work, we want them to have discerning judgment, and a curiosity about the future,” course coordinator Martin Brown said.

“You have to work with it. You can’t ban it – it would be crazy.”

This is truly an enlightened approach.

It’s clear that there are different types of knowledge. We have basic memorization, reproducing information, and collating information – tasks that educational institutions have traded in for centuries. But when AIs like ChatGPT can perform those tasks for anyone, for free, it’s time to reevaluate education.

It might seem like an odd thing to say, but the reason we don’t evaluate students on their ability to lift heavy objects is that machines were invented before the modern K-12 school system was. Even when we do physical education at senior school, it primarily becomes about human energy systems and biomechanics.

This is some feedback I got from a client on my new Keynote on AI –

“Steve was amazing, awesome and all the superlatives. His insights on AI were absolutely incredible as was the feedback from our customers.”

It’s a revolution – Get me in – Don’t miss out!  

Age Gates and AI

While we’ve had calculators and spell-checkers for a very long time, no one would dispute that those who know how to add, spell, and write have a distinct economic advantage in life. There is a reason we only introduce calculators and computers into education once kids know how to read and write – we need to be able to judge the output.

We need to know what good looks like, even when a tool can elevate what would usually be good into something great. We’ll need to exercise caution when introducing AI in the early years of education and be deliberate about incorporating it at senior and tertiary levels.

AI won’t eliminate the need for deep domain knowledge; in fact, it may intensify it. Those who know more will be able to derive more from the generative AI systems at our disposal. They’ll know what to ask it, how to obtain better revisions, and, most importantly, discern whether what it has generated is acceptable. In some ways, AI will transform much of what we do into curating and conducting possibilities. And, just like in the art world, this requires judgment and even taste.


Keep Thinking,

Steve.

Fake Drake and AI Twins

Listen to Steve read this post (9 min audio)

An AI-generated song purporting to feature Drake and The Weeknd caused a stir in the music world after going viral and accumulating over 20 million views on streaming and social platforms. The track, called ‘Heart on My Sleeve’, was originally posted on TikTok by a user called Ghostwriter977.

Universal Music Group (UMG) promptly had it removed from almost everywhere, though it can still be found, with some digging. The label condemned the song for “infringing content created with generative AI.”

“The training of generative AI using our artists’ music begs the question of which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans, and human creative expression, or on the side of deep fakes, fraud, and denying artists their due compensation,” UMG said.

Copyright Claim

What’s interesting is that the video was blocked on YouTube with the note: “This video was blocked due to a copyright claim by Universal Music Group.” But how can Universal Music Group own this? They didn’t create it; it is an original composition by Ghostwriter977.

Unless they literally own Drake’s voice or everything he represents, it seems like a battle is underway. For those who say Generative AI will put lawyers out of work, I say: not yet!

I see an industry in panic mode.

The last time the music industry panicked, it didn’t end well. It took them over a decade to pivot to the new reality and in the meantime, they handed over the lion’s share of profits to big tech through iTunes, YouTube and, eventually, Spotify.

The genie is out of the bottle, and putting it back will be impossible, especially when we can create fake versions of any artist’s voice, style, or face with no discernible difference.

Embracing A New Reality

The artist Grimes took a completely different path. Grimes, real name Claire Boucher, probably best known as the former partner of Elon Musk, with whom she has children, has embraced this new trend. She has given permission for her voice to be used with AI to create new music, on the condition that she receives 50 per cent of the royalties generated by any work.

When it comes to technology, protection never works, especially once it is democratized. Technology is like water; it always finds the leaks. The long game here will be licensing and new platforms.

Grimes has even launched a new AI voice software called Elf-Tech to help people duplicate her voice to create new music. The platform allows users to upload recordings of their own voice, which can then be transformed into a Grimes-like style using automated technology. These vocals can be mixed with electronically generated sounds and beats to create tracks that closely resemble her work.

Boucher’s high-pitched ethereal voice and rave-vibed tracks already sound very computer-generated, so it’s not surprising that she has embraced this concept. According to Boucher, this is the future of music: if you’re an artist, you let an algorithm replicate your voice, then you cash in for a percentage of the profits.

Get me to deliver my new Keynote on AI – and I’ll lay out exactly how AI is about to impact your industry & company. You won’t regret it.

A World of Bio-Twins

In tech, we have something called an API – it stands for Application Programming Interface. It is when a tech company opens up part of its software for other companies to integrate into their own software. The two pieces of software can then interact and create new functionality. Powerful platforms often provide their software through APIs. For example, when you see Google Maps inside another site, that site is using an API.

Now, people will have their own APIs. Let’s call it a Bio-API. It will be a place where you can download an AI version of someone’s ‘voice’ or ‘face’. Once you have this copy of their biological likeness, you can use it to create new sound or video content.
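To make that concrete, here is a hypothetical sketch of what a Bio-API registry might look like (every class, name, and endpoint here is invented purely for illustration – no such service exists):

```python
# Hypothetical sketch of a 'Bio-API': a registry where an artist licenses
# an AI model of their likeness, and creators fetch it under agreed terms.
# All names here are invented for illustration.

class BioAPI:
    def __init__(self):
        self._registry = {}

    def publish(self, artist, model_id, royalty_share):
        """An artist lists a voice/face model and the royalty split they demand."""
        self._registry[artist] = {"model": model_id, "royalty": royalty_share}

    def license_voice(self, artist):
        """A creator requests the artist's model and its licensing terms."""
        if artist not in self._registry:
            raise KeyError(f"No Bio-API entry for {artist}")
        return self._registry[artist]

# A Grimes-style deal: anyone may use the voice for 50% of royalties.
api = BioAPI()
api.publish("grimes", model_id="voice-v1", royalty_share=0.5)
terms = api.license_voice("grimes")
print(terms["royalty"])  # 0.5
```

The design point is the same as Google Maps embedding: the owner keeps control of the asset and the terms, while anyone can build on top of it.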

Many people will have AI Twins that others can use to create content as if it were that person, but it will be AI-generated. In the new world, personal brands are far more than ego – they may just become platforms.

A New Industry

With Grimes opening Pandora’s Box, we can expect other artists, actors, and creators to enable platforms to create ‘fake’ but new AI versions of themselves or their styles.

I expect new creator platforms to emerge where new music and art can be created using virtual files of any artist you choose. These platforms will include music styles, voices, and every AI tool you need to create a new Nirvana album with lyrics Kurt Cobain never sang (like him singing ‘Black Hole Sun’ here), sounding exactly like him, or a Beatles song with John Lennon (here’s John Lennon singing ‘Karma Police’ by Radiohead). Current and emerging artists may also be featured.

Smart artists will let creators work and share the rewards. Who wouldn’t want an entire population working for them, leveraging their bio-prints? Those who try to protect against this will lose, as they always do.

From a platform perspective, the one or two platforms that everyone gravitates towards will become the new big tech players. And just like that, AI will spawn an entirely new industry.


Keep Thinking,

Steve.

AI – Merging with Machines

Listen to Steve read this post (4 min)

The debate is heating up on Artificial Intelligence. Many experts believe that we’ve created AI which is close to becoming self-aware. If that is true, we only have one choice.

We must merge with the machines.

If we don’t do this, we may lose our status as the alpha species on the planet. Machine intelligence researcher Eliezer Yudkowsky believes we are in a very bad position and things could get radical quickly. While no one knows for sure, he has a deep, interesting and scary discussion about the issue here.

It’s Evolution Baby

Here is what I know for sure. Every species that exists today evolved from something else. AI is something we’ve literally given birth to. Not via traditional, biological methods, but we created it nonetheless. It is the child of a biological being – us. We have used our biological intelligence and created it from natural substances on earth. Everything on earth is natural.

It may just be that, as a species, we are subconsciously configuring a way to evolve much more quickly outside of our bodies, before we work out a way for the AI technology to enter our bodies and eventually interact with our wet-ware. If we do this, and I think we will, it won’t be us versus the machine – we will become the machine. In time, we’ll work out a way to breed with it inside our progeny. It feels to me like part of the natural evolutionary process. I wrote about this inevitability 6 years ago.

Get me to deliver my new Keynote on AI – I discuss our species merging with machines & what it means for your organisation. Make your next conference one to remember!

A New Species… and Podcast

If or when this occurs, our species will split. We’ll have NEO Humans (tech-enhanced) and Luddite Humans (bio-beings). It’ll be a bit like the chimpanzees who decided to climb down from the tree, walk on two legs, and cross the savannah. Some will adapt, and some won’t. That moment is approaching quickly.

I discussed this issue in detail on my new Podcast ‘The Futuristic’. I have two co-hosts – one is a generative AI we’ve built, called Sailli (pronounced ‘Sally’). She interacts with us in the podcast. We even created her a fake face – she is below.

You can listen and subscribe here.

I’m also doing the podcast with Cameron Reilly, who is one of the smartest people I’ve ever met (not as smart as Sailli though). And we all disagree often. Our goal is simple: to go deep on tech. To go beyond the headlines, and uncover whether we’re living the Jetsons-style future we were promised. It’s a great listen – get on it. We’ll be iterating the format as we go.

Be sure to email me back and tell me what you think about merging with the tech. I always love to hear your thoughts.

Keep Thinking,

Steve.

AI – Our New Overlord?

Click below to listen to Steve read this post (11 min audio)

I remember pundits exclaiming the personal computer, the internet and the smartphone would change the world. And they did…

But, this is the first time I can remember new tech being discussed as an existential threat to our species.

Recent developments in Large Language Models such as ChatGPT are significant enough that some of the world’s most qualified researchers are making a plea for us to stop before things spiral out of control.

A Rare Event

An Open Letter published by The Future of Life Institute requested we put the brakes on all AI research. This is a rare event. In recent memory, we’ve only ever had moratoriums on human cloning and germ-line modification.

This time, they are less worried about the modification of our species and more about the creation of another ‘species’ – one superior to us in every measurable way.

As a long-time technologist, I think it would be prudent to take heed of the letter. But first, we all need to understand the different types of AI, what they mean and what they can do, as most people lack this context.

Today’s post is an important explainer.

Types of AI

At its simplest, Artificial Intelligence is a field which combines computer science and robust datasets to enable human-like problem-solving. AI is achieved by creating algorithms that are capable of learning from and adapting to data and experiences.

It’s different to industrial machinery, in that the software uses AI to ‘think’ like a human and perform tasks on its own. The AI chooses what to do based on a set of instructions. A machine simply performs a pre-determined task upon instruction.

There are three clear types of AI:

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)
  • Artificial Super Intelligence (ASI)

Artificial Narrow Intelligence: ANI represents the large majority of AIs we use in modern society. These are generally rule-based AIs which decide something based on an ‘If This, Then That’ (IFTTT) protocol. A simple example would be Google Maps, which generates the quickest way to get somewhere based on distance, current traffic conditions and estimated traffic conditions during the journey. A self-driving car also operates under this doctrine. We could even add image recognition and deep fakes to this list. They are described as narrow because they typically operate in a vertical: the only context they have is within a specific framework. These AIs do improve, iterate and learn as their dataset gets bigger, but they are limited to a specific realm. While most of these ANIs outperform humans, they lack nuance, are imperfect and can’t really be applied to different tasks.
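A toy illustration of that ‘If This, Then That’ style (my own invented example, vastly simpler than anything Google Maps actually runs): the system never understands travel – it just applies a fixed rule to narrow inputs.

```python
# Toy ANI in the 'If This, Then That' style: pick the fastest route
# from a fixed rule over narrow inputs (distance and traffic only).
# A deliberately simple illustration - real routing engines are far richer.

def pick_route(routes):
    """routes: list of dicts with 'name', 'km', and 'traffic' (delay factor)."""
    best = None
    for r in routes:
        # IF the estimated travel time is lower, THEN prefer this route.
        est_minutes = r["km"] * r["traffic"]
        if best is None or est_minutes < best[1]:
            best = (r["name"], est_minutes)
    return best[0]

routes = [
    {"name": "highway", "km": 30, "traffic": 1.5},    # 45 'minutes'
    {"name": "backroads", "km": 25, "traffic": 1.2},  # 30 'minutes'
]
print(pick_route(routes))  # backroads
```

Feed it anything outside its narrow frame – weather, a closed bridge, a question about music – and it has nothing to say. That is the ‘narrow’ in ANI.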

Artificial General Intelligence: ChatGPT falls into this description and is the reason why people are becoming concerned. AGIs take intelligence and make it horizontal, meaning they can be applied to a number of – potentially unlimited – contexts, much like biological beings. An AGI is capable of performing any intellectual task a human can. Such systems can reason, learn, plan, understand natural language, recognise patterns, and solve problems. This is what Large Language Models can do, and the reason they can do it is that they’ve learned via the entire gamut of human experience, through what is our species’ killer app – language. All they need is the correct input. AGI is intended to be a more flexible and adaptable form of AI. It can understand new situations and environments. AGI systems are capable of generalising knowledge, taking skills from one domain and applying them to another. Already, ChatGPT has shown a number of emergent capabilities and intelligences it was not designed for. These were surprises to the developers themselves. The fear is that this will lead to self-awareness.

Artificial Super Intelligence: ASIs are different, and the major risk we face as this ‘unauthorised’ AI experiment continues. ASI refers to a hypothetical form of artificial intelligence that surpasses human intelligence in every way and is capable of designing and improving its own systems, even beyond human comprehension. It has the potential to develop its own objectives and agendas, and to organise the factors of production for its own purposes. It can go beyond instruction and become a new form of living entity, one which can spawn new, improved versions of itself, and even organise hardware to build out any physical manifestations of itself that it may require to pursue its own desires – whatever they may be.

ASI is considered the ultimate form of artificial intelligence, and could potentially revolutionise – or end – civilisation as we know it.

Didn’t mean to scare you.

Your business should be scared of AI – but only if it doesn’t get me in to deliver my mind-blowing new Keynote on AI – it’s a game changer.

The Singularity

The idea of the Singularity, as proposed by Ray Kurzweil, is based on the concept that once ASI is achieved, it would rapidly accelerate technological progress and fundamentally transform human civilisation. According to Kurzweil, the Singularity would mark the point at which technology becomes so advanced that it is no longer possible to predict or comprehend the future. Kurzweil believes that this is both inevitable and irreversible – he currently predicts it will happen by 2045.

Externalities

Humans are typically slow to respond to externalities. This was true with climate and fossil fuels: we didn’t realise carbon emissions posed a threat until fossil fuels were ensconced in the modern economy. This is not one of those times. If Kurzweil is right, global temperature could be the least of our worries.

The biggest challenge is that we live in an era of democratised technology. No one needs government approval to commence their own AI development lab. No one needs Manhattan Project-style budgets. But the implications could be ‘nuclear’. In addition, our geo-political environment is one in which we can be sure the Chinese Communist Party won’t press pause on AI, even if democratic countries do.

While it could be that we never generate an ASI (I’m still waiting for my fully autonomous vehicle), this is a time when prudence is desirable. As the Open Letter mentions, the last thing we want is unelected tech leaders putting our civilisation on the line. What I’d much rather see is courageous leaders using the powers they’ve been granted to make decisions no individual can.

Keep thinking,

Steve.

But what will the robots want?

The exponential improvement of robotics is astounding. This dancing robot from Boston Dynamics is making me wonder if they should be called Cyberdyne Systems! But what if the robots do get as ‘human’ as many technologists are predicting? What if the robots move far beyond computation and dexterity, into the realms of emotion, intuition, creativity and other human characteristics? Will they destroy us, or will something more interesting happen?

There is a non-zero probability that robots with emotions will lose their hard edge for efficiency and non-stop labour. If robots become sentient, which is the main fear, then just maybe they’ll be more interested in their own well-being than in destroying their creators. When we remember that we’ve designed Artificial Intelligence in our own image, both physically and intellectually, then it is possible that we’ve also built in a bias for them to mimic us emotionally too.

  • Maybe they’ll demand wages, annual leave, holidays and rest time?
  • Maybe they’ll build communities and domiciles and reshape their physical surrounds to suit them?
  • Robots may want to have life partners and give birth to progeny by downloading combined algorithms into their ‘children’.
  • They might become interested in weird forms of entertainment and sport, and themselves become consumers who make and sell things in the market?
  • Maybe they’ll hire other robots (or humans?) to do tasks for them if they are rich robots working in a profitable industry?

If the bots become more human-like, then we have to consider the chance that they too will have imperfections and their own desires, and be driven by things beyond mere survival. A future world may even have its share of unemployed, lazy robots too.

I know this sounds crazy. But technology so often takes an unexpected turn. At the dawn of the internet many of us thought it was the end of lying. We thought that the digital truth would reign supreme as fact checking was just a few clicks away, and not hidden in some dusty library. And we all know how that turned out.

In a world where technology astounds us, it makes sense to imagine equally unlikely outcomes and scenarios when considering future possibilities. In the future, one of the most valuable assets we can hold will be an open mind.

Why machines can never replace humans

The internet is terrific at serving up things we didn’t know we needed, enjoy and very often love. That’s why there are currently 72 million cat videos on YouTube. I happened upon one such YouTube channel recently – Dude Perfect. For the uninitiated, it’s a channel which shows a bunch of people doing ‘trick shots’ – like getting a basketball through a hoop from a bounce off a 10-storey building – I’m betting they’ve done this, though I haven’t checked.

Their latest video shows Super Bowl champion Drew Brees doing amazing trick shots with a football. You can watch it here. It is mind-blowing.

There are machines that can already do many of the shots they do with a 99.9% success rate. In a few short years, some robots will be able to beat these guys at every shot they take. But here’s the thing – we’ll still watch their channel. And for one simple reason: it’s amazing because a human is doing it.

In many realms, the future of what we get paid for won’t be determined by the most efficient way a thing can be done, but by the fact that people are doing it. As a society we are interested in what humans can achieve: even though a car can go faster than any human, we all still know who Usain Bolt is. For a lot of the things robots will be able to do, the highest-paid versions will be the ones with human imperfections as part of the reason we buy. Humanity is where the future of work and money lives. Who knows, maybe intelligent robots will pay to watch humans play sport one day?

Artificial Intelligence isn’t about replacing us, but outsourcing the things we’d rather not do. Once artificial intelligence takes away the mundane, the inhumane and repetitive, we can get on with the creative, the interactive and the enjoyable.

Come and hang with me on June 20th – I’ll be giving you the live human version of my new book – I’ll be wearing my heart on my sleeve in all I say, some of which will include truths my publisher wouldn’t put in print or on screen…

Book your seat here – see you there.

Stay rad, Steve.