Elections in the AI era

Listen to Steve read this post (7 min audio) – let me know if you listened instead of reading this!

“Personally I think the idea that fake news on Facebook influenced the election in any way — is a pretty crazy idea.” – Mark Zuckerberg, 2016.

The echoes of Zuckerberg’s statement back in 2016 resonate loudly today.

What may sound less crazy now is this: the 2024 US election cycle may be the first true AI election. Donald Trump and Gov. Ron DeSantis have already used fake AI images of each other, and we are still 16 months out from the vote.

This is an election where the capabilities of Generative AI will quite simply have a massive influence. We understand its potential, we anticipate its use, yet governments appear largely indifferent to its probable role in election campaigns. The potential repercussions on geopolitics and the global economy are staggering.

It’s almost certain that the forthcoming election will transcend the era of Facebook ads. We are venturing into a realm where an underlying uncertainty will accompany everything we witness. Our world has advanced past glitchy fakes into a phase we in AI term ‘No Noticeable Difference’.

Democratization of Disinformation

Unless we personally attended an event, we may forever question its authenticity. Considering this evolving reality, along with the apparent legislative apathy, it’s crucial that we understand the redefined landscape of politics in the AI era.

Historically, election interference has been a costly and challenging endeavour, typically orchestrated by rogue states like Russia, China, and North Korea. Now, anyone with a laptop and a personal election agenda can fabricate anything they choose.


This week Sir Bob Geldof interviewed me about AI.

So if you get me in to do my new Keynote on AI  – you will be in very good company indeed.

He was astounded at what’s in store – Don’t miss out!


The Perfect Market

Generative AI thrives on the data set it’s trained on. In this context, politics is exceptionally vulnerable. The public life led by candidates offers a treasure trove of data, with endless recordings across media platforms providing rich training material. Combine this with the readily available Generative AI tools capable of generating near-perfect duplicates of voice, video and images, and the populace at large possesses the means to create what could be accepted as ‘real’ by an unsuspecting voter.

The true democratization of a technology is signified when it becomes integral to the political process. We’ve witnessed this evolution with print, radio, TV, and social media. The next stage in this progression is the advent of Generative AI.

Electoral manipulation is set to intensify exponentially. Digital falsifications will be of higher quality. Microtargeting will be significantly more potent, possibly reaching the granularity of individuals.

Political advertising can graduate from hastily cobbled-together talk pieces to cinematic-quality productions, potentially painting dystopian visions of the opposition in power and inducing irrational fear. Those who once lacked resources can now participate on par with nation states.

However, it’s the subtler uses and their societal implications we ought to focus on.

Subtle Social Implications

As conscientious voters, we used to be able to spot a fabricated picture, video, or statement – especially if the political event or speech never transpired. Mainstream media generally does a commendable job of fact-checking in this regard.

But, if present-day AI is used to redub a speech, subtly modifying a few words or a sentence to distort a candidate’s message, with impeccably mimicked facial movements, the game changes entirely. Consider this happening after a presidential debate, with slightly manipulated footage making its way into the multitude of YouTube highlights reels. Discerning the truth from the fiction could become difficult indeed.

Once fake information permeates the political landscape, we risk descending into an era where cynicism usurps belief in everything we see. In such an environment, a candidate could deny anything by claiming it is an AI-generated fiction. This would especially be the case with controversial or scandalous footage they never wanted the public to see.

Post Media Reality

A crazy idea I have is that we may enter a new post-media era. If the only things we know truly occurred are those we witnessed with our own eyes, it would be akin to living again in a world without newsprint, radio, TV or the internet. When anything might not be real, there’s a very good chance we won’t pay attention to any of it. And if that happens, the attention economy could end up eating itself out of existence.

– – –

Keep Thinking,

Steve.

AI – Risk of Extinction

Listen to Steve read this post (8 min audio) – let me know if you listened instead of reading this!

Imagine for a minute you were given an investment opportunity which could be life-changing, enough to make you better off than you are now, maybe even seriously rich. But the price of entry was that you had to put every single asset you owned into it and there was a small chance it would go to absolute zero — even a 5-10 per cent probability that this would occur.

Would you do it?

I wouldn’t consider it, even for a second. Yet, these are the risks we are being asked to take with our live Generative AI experiment, at least according to some AI experts.
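The logic of that thought experiment can be made concrete with a quick expected-value sketch. The 2x upside multiplier below is purely an illustrative assumption, not a figure from any researcher:

```python
# Expected value of an all-in bet where there is a small chance of total ruin.

def expected_value(p_ruin: float, upside_multiplier: float) -> float:
    """Expected final wealth per $1 staked: with probability p_ruin you
    lose everything; otherwise wealth is multiplied by upside_multiplier."""
    return (1 - p_ruin) * upside_multiplier + p_ruin * 0.0

for p in (0.05, 0.10, 0.50):
    print(f"p(ruin) = {p:.0%} -> expected value per $1: ${expected_value(p, 2.0):.2f}")
```

Note the trap: even when the expected value comes out above $1, a bet that can go to absolute zero is one a rational all-in bettor avoids, because ruin is unrecoverable – which is exactly the point of the analogy.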

P(Doom)

If you think P(Doom) sounds worrying, trust your instincts. P(Doom) is the term AI researchers are using to describe the probability that Artificial Super Intelligence will emerge and become an existential risk for humanity. (What a wonderful little moniker.)

Here is the crazy thing. Many learned AI researchers now put this probability as high as 20-50 per cent. Kinda scary. Even a 5 per cent chance of wiping out our species is a worry… and almost no one puts it lower than that. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has publicly said the risk is real and has a P(Doom) of around 5 per cent.

It’s at this point that we must remember that we are not talking about something bad happening here, like, say, a recession, or a war, a hurricane, or even a pandemic. All of these we’ve faced before and, with a lot of pain, have overcome, collectively. We are talking about the end of humanity – it doesn’t get any heavier than that.

  • Reply to this email and tell me if AI worries you.

Says Who?

Some of those with a P(Doom) at worryingly high levels are not fear-mongering crackpots, but those deeply involved in giving birth to AI. Here are some of the worried AI researchers and their P(Doom) percentages.

  • Michael Tontchev, a former Microsoft software developer and current senior software lead at Meta, has his at 20 per cent.
  • Paul Christiano, a former OpenAI researcher who also holds a Ph.D. in theoretical computer science from UC Berkeley, has his at 50 per cent.
  • Eliezer Yudkowsky, a renowned AI researcher and decision theorist, has his at 50 per cent.
  • Geoffrey Hinton, known as the godfather of AI, who recently left Google, has his at 50 per cent.

Cold War 2.0 & AI Bunkers

As a keynote speaker on the future and AI, the most common question I get asked is whether we will face a ‘Terminator Moment’. And just like at the height of the Cold War, those with their fingers on the button seem to be the only ones with a bunker they can run to if things go wrong.

In 2016, Altman said in an interview that he was prepping for survival in the event of a catastrophe such as a rogue AI, claiming to have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces, and a big patch of land in Big Sur he can fly to.

Altman’s doomsday vision of AI gone wrong is not uncommon in Silicon Valley. No tech billionaire worth his salt is without a post-apocalyptic contingency plan and a remote bunker. Both Peter Thiel and Google co-founder Larry Page have snapped up land in New Zealand and built bunkers. They literally have private jets filled with fuel – which they don’t use – and pilots paid to wait for this ‘nuclear moment’.


This is some feedback I got from a client on my new Keynote on AI –

“Steve was amazing, awesome and all the superlatives. His insights on AI were absolutely incredible as was the feedback from our customers.”

It’s a revolution – Get me in – Don’t miss out!  


The AI Saviour?

Readers will know that I’m typically positive when it comes to the emancipating power of technology. And the last thing I want to be accused of is fear-mongering. There is a counter-argument to the worries about the AI threat:

We may not be able to survive without it. Really.

It seems to me that the probability of our species succumbing to other existential risks is greater than most experts’ AI P(Doom). The nuclear threat is still very real, and possibly greater than it ever was during the Cold War. While we theoretically control it, we can only count ourselves lucky that a crazed suicide bomber or rogue terrorist group hasn’t secured and deployed nuclear weapons.

Likewise, despite our progress with renewable energy, I can’t see progress by any large nation-state that gives me the confidence to believe we can reduce our global emissions to the level needed before we reach a point of no return. We are still largely addicted to fossil fuels, GDP, economic growth, and consumption.

Maybe the thing we actually need is an omniscient, benevolent AI to save us from ourselves!

An AI which can uncover new forms of yet-to-be-discovered, abundant energy, or ways to ensure we circumvent nuclear disaster via ‘Artificial Diplomacy’ – an AI which can help us navigate species-level dangers which are already clear and present.

Keep Thinking,

Steve.

Banning AI in Schools

Listen to Steve read this post (6 min audio) – let me know if you listened instead of reading this!

With Generative AI, we now all possess virtual PhDs in every subject. Much of the intellectual labor we once performed can now be done for us, and the results are often better than what most humans could produce. This prompts the question of what education’s role should be.

How to Cheat!

ChatGPT is, of course, at the forefront. While many schools and universities have been attempting to identify or ban “ChatBot cheating,” one of the more commendable approaches I’ve observed was taken by the University of Sydney.

First-year medical students enrolled in the course “Contemporary Medical Challenges” now use ChatGPT as part of their curriculum.

Students were given a task: to formulate a question on a modern medical challenge of their choosing, prompt ChatGPT to “write” an essay on the topic, and meticulously review and edit the AI’s output. They were required to complete at least four drafts and reviews, edit and re-prompt the AI, and then refine it into a submission-worthy final draft.

The main criterion for success was the ability to shape the prompts so that ChatGPT produced an optimal essay, and to document the process and thinking the students went through while editing the essay and re-prompting the AI to delve into the appropriate arenas of knowledge.

“We want to make sure the grads are not just getting ChatGPT to do their work, we want them to have discerning judgment, and a curiosity about the future,” course coordinator Martin Brown said.

“You have to work with it. You can’t ban it – it would be crazy.”

This is truly an enlightened approach.

It’s clear that there are different types of knowledge. We have basic memorization, reproducing information, and collating information – tasks that educational institutions have traded in for centuries. But when AIs like ChatGPT can perform those tasks for anyone, for free, it’s time to reevaluate education.

It might seem like an odd thing to say, but the reason we don’t evaluate students on their ability to lift heavy objects is that machines were invented before the modern K-12 school system was. Even when we do physical education at senior school, it primarily becomes about human energy systems and biomechanics.

This is some feedback I got from a client on my new Keynote on AI –

“Steve was amazing, awesome and all the superlatives. His insights on AI were absolutely incredible as was the feedback from our customers.”

It’s a revolution – Get me in – Don’t miss out!  

Age Gates and AI

While we’ve had calculators and spell-checkers for a very long time, no one would dispute that those who know how to add, spell, and write have a distinct economic advantage in life. There is a reason we only introduce calculators and computers into education once kids know how to read and write – we need to be able to judge the output.

We need to know what good looks like, even when a tool can elevate what would usually be good into something great. We’ll need to exercise caution when introducing AI in the early years of education and be deliberate about incorporating it at senior and tertiary levels.

AI won’t eliminate the need for deep domain knowledge; in fact, it may intensify it. Those who know more will be able to derive more from the generative AI systems at our disposal. They’ll know what to ask it, how to obtain better revisions, and, most importantly, discern whether what it has generated is acceptable. In some ways, AI will transform much of what we do into curating and conducting possibilities. And, just like in the art world, this requires judgment and even taste.


Keep Thinking,

Steve.

Fake Drake and AI Twins

Listen to Steve read this post (9 min audio)

An AI-generated song purporting to feature Drake and the Weeknd caused a stir in the music world after going viral and accumulating over 20 million views on streaming and social platforms. Called ‘Heart on My Sleeve’, the track was originally posted on TikTok by a user called Ghostwriter977.

Universal Music Group (UMG) promptly had it removed from almost everywhere, though it can still be found, with some digging. The label condemned the song for “infringing content created with generative AI.”

“The training of generative AI using our artists’ music begs the question of which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans, and human creative expression, or on the side of deep fakes, fraud, and denying artists their due compensation,” UMG said.

Copyright Claim

What’s interesting is that the video was blocked on YouTube with the note: “This video was blocked due to a copyright claim by Universal Music Group.” But how can Universal Music Group own this? They didn’t create it; it is an original composition by Ghostwriter977.

Unless they literally own Drake’s voice or everything he represents, it seems like a battle is underway. For those who say Generative AI will put lawyers out of work, I say: not yet!

I see an industry in panic mode.

The last time the music industry panicked, it didn’t end well. It took them over a decade to pivot to the new reality and in the meantime, they handed over the lion’s share of profits to big tech through iTunes, YouTube and, eventually, Spotify.

The genie is out of the bottle, and putting it back will be impossible, especially when we can create fake versions of any artist’s voice, style, or face with no discernible difference.

Embracing A New Reality

The artist Grimes took a completely different path. Grimes – real name Claire Boucher, probably best known as Elon Musk’s former partner and the parent of his children – has embraced this new trend. She has given permission for her voice to be used with AI to create new music, on the condition that she receives 50 per cent of the royalties generated by any work.

When it comes to technology, protection never works, especially once it is democratized. Technology is like water; it always finds the leaks. The long game here will be licensing and new platforms.

Grimes has even launched a new AI voice software called Elf-Tech to help people duplicate her voice to create new music. The platform allows users to upload recordings of their own voice, which can then be transformed into a Grimes-like style using automated technology. These vocals can be mixed with electronically generated sounds and beats to create tracks that closely resemble her work.

Boucher’s high-pitched, ethereal voice and rave-vibed tracks already sound very computer-generated, so it’s not surprising that she has embraced this concept. According to Boucher, this is the future of music: if you’re an artist, you let an algorithm replicate your voice, then you cash in for a percentage of the profits.

Get me to deliver my new Keynote on AI  – and I’ll lay out exactly how AI is about to impact your industry & company. You won’t regret it.

A World of Bio-Twins

In tech, we have something called an API – it stands for Application Programming Interface. It is when a tech company opens up part of its software for other companies to integrate into their own software. The two pieces of software can then interact and create new functionality. Powerful platforms often provide their software through APIs. For example, when you see Google Maps inside another site, that site is using an API.
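The idea can be sketched in a few lines of code, with invented names (this is not the real Google Maps API – just an illustration of one program exposing functionality for another to use):

```python
# The "provider": a pretend maps service exposing one public function.
def route_time(origin: str, destination: str) -> int:
    """Return an estimated travel time in minutes (hard-coded toy data)."""
    fake_times = {("Home", "Office"): 25, ("Home", "Airport"): 40}
    return fake_times.get((origin, destination), 30)

# The "consumer": a separate application that integrates the service
# through its public interface, without knowing its internals.
def commute_report(origin: str, destination: str) -> str:
    minutes = route_time(origin, destination)
    return f"{origin} -> {destination}: about {minutes} minutes"

print(commute_report("Home", "Office"))  # Home -> Office: about 25 minutes
```

Real APIs expose this over the web (HTTP requests returning structured data), but the contract is the same: a documented interface, internals hidden.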

Now, people will have their own APIs. Let’s call it a Bio-API. It will be a place where you can download an AI version of someone’s ‘voice’ or ‘face’. Once you have this copy of their biological likeness, you can use it to create new sound or video content.

Many people will have AI Twins that others can use to create content as if it were that person, but it will be AI-generated. In the new world, personal brands are far more than ego – they may just become platforms.

A New Industry

With Grimes opening Pandora’s Box, we can expect other artists, actors, and creators to enable platforms to create ‘fake’ but new AI versions of themselves or their styles.

I expect new creator platforms to emerge where new music and art can be created using virtual files of any artist you choose. These platforms will include music styles, voices, and every AI tool you need to create a new Nirvana album with lyrics Kurt Cobain never sang (like him singing ‘Black Hole Sun’ here), sounding exactly like him, or a Beatles song with John Lennon (here’s John Lennon singing ‘Karma Police’ by Radiohead). Current and emerging artists may also be featured.

Smart artists will let creators work and share the rewards. Who wouldn’t want an entire population working for them, leveraging their bio-prints? Those who try to protect against this will lose, as they always do.

From a platform perspective, one or two singular platforms that everyone gravitates towards will become new big tech players. And just like that, AI will spawn an entirely new industry.


Keep Thinking,

Steve.

AI – Merging with Machines

Listen to Steve read this post (4 min)

The debate is heating up on Artificial Intelligence. Many experts believe that we’ve created AI which is close to becoming self-aware. If that is true, we only have one choice.

We must merge with the machines.

If we don’t do this, we may lose our status as the alpha species on the planet. Machine intelligence researcher Eliezer Yudkowsky believes we are in a very bad position and that things could get radical quickly. While no one knows for sure, he has a deep, interesting and scary discussion of the issue here.

It’s Evolution Baby

Here is what I know for sure. Every species that exists today evolved from something else. AI is something we’ve literally given birth to. Not via traditional biological methods, but we created it nonetheless. It is the child of a biological being – us. We have used our biological intelligence to create it from natural substances on earth. Everything on earth is natural.

It may just be that, as a species, we are subconsciously configuring a way to evolve much more quickly outside of our bodies, before we work out a way for AI technology to enter our bodies and eventually interact with our wet-ware. If we do this, and I think we will, it won’t be us versus the machine – we will become the machine. In time, we’ll work out a way to breed with it inside our progeny. It feels to me like part of the natural evolutionary process. I wrote about this inevitability six years ago.

Get me to deliver my new Keynote on AI – I discuss our species merging with machines & what it means for your organisation. Make your next conference one to remember!

A New Species… and Podcast

If or when this occurs, our species will split. We’ll have NEO Humans (tech-enhanced) and Luddite Humans (bio-beings). It’ll be a bit like some of the chimpanzees who decided to climb down from the tree, walk on two legs, and cross the savanna. Some will adapt, and some won’t. That moment is approaching quickly.

I discussed this issue in detail on my new podcast, ‘The Futuristic’. I have two co-hosts – one is a generative AI we’ve built called Sailli (pronounced ‘Sally’). She interacts with us in the podcast. We even created a fake face for her.

You can listen and subscribe here.

I’m also doing the podcast with Cameron Reilly, who is one of the smartest people I’ve ever met (not as smart as Sailli though). And we all disagree often. Our goal is simple: to go deep on tech. To go beyond the headlines and uncover whether we’re living the Jetsons-style future we were promised. It’s a great listen – get on it. We’ll be iterating the format as we go.

Be sure to email me back and tell me what you think about merging with the tech. I always love to hear your thoughts.

Keep Thinking,

Steve.

AI – Our New Overlord?

Click below to listen to Steve read this post (11 min audio)

I remember pundits exclaiming that the personal computer, the internet and the smartphone would change the world. And they did…

But, this is the first time I can remember new tech being discussed as an existential threat to our species.

Recent developments in Large Language Models such as ChatGPT are significant enough that some of the world’s most qualified researchers are making a plea for us to stop before things spiral out of control.

A Rare Event

An Open Letter published by The Future of Life Institute requested we put the brakes on giant AI experiments. This is a rare event. From recent memory, we’ve only ever had moratoriums on human cloning and germ-line modification.

This time, they are less worried about the modification of our species and more about the creation of another ‘species’ – one which is superior to us in every measurable way.

As a long-time technologist, I think it would be prudent to take heed of the letter. But first, we all need to understand the different types of AI, what they mean and what they can do, as most people lack this context.

Today’s post is an important explainer.

Types of AI

At its simplest, Artificial Intelligence is a field which combines computer science and robust datasets to enable human-like problem-solving. AI is achieved by creating algorithms that are capable of learning from and adapting to data and experiences.

It’s different to industrial machinery: the software uses AI to ‘think’ like a human and perform tasks on its own. The AI chooses what to do based on a set of instructions, whereas a machine simply performs a pre-determined task upon instruction.

There are three clear types of AI:

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)
  • Artificial Super Intelligence (ASI)

Artificial Narrow Intelligence: ANI represents the large majority of the AIs we use in modern society. These are generally rule-based AIs which decide something based on an ‘If This, Then That’ (IFTTT) protocol. A simple example would be Google Maps, which generates the quickest way to get somewhere based on distance, current traffic conditions and estimated traffic conditions during the journey. A self-driving car also operates under this doctrine. We could even add image recognition and deep fakes to this list. They are described as narrow because they typically operate in a vertical; the only context they have is within a specific framework. These AIs do improve, iterate and learn as their datasets get bigger, but they are limited to a specific realm. While most ANIs outperform humans, they lack nuance, are imperfect and can’t really be applied to different tasks.
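The ‘If This, Then That’ character of an ANI can be sketched in a few lines. This is a deliberately toy route-picker with made-up numbers, not how Google Maps actually works:

```python
# A narrow, rule-based decision: pick the fastest route from fixed data.
routes = [
    {"name": "Highway", "distance_km": 30, "traffic_delay_min": 20},
    {"name": "Backstreets", "distance_km": 22, "traffic_delay_min": 5},
]

def estimated_minutes(route: dict, avg_speed_kmh: float = 60) -> float:
    # If this (distance plus current delay), then that (an estimated time).
    return route["distance_km"] / avg_speed_kmh * 60 + route["traffic_delay_min"]

best = min(routes, key=estimated_minutes)
print(f"Fastest route: {best['name']}")
```

The rules never leave their vertical: this logic can rank routes and nothing else, which is exactly what makes it narrow.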

Artificial General Intelligence: ChatGPT falls into this description, and is the reason people are becoming concerned. AGIs take intelligence and make it horizontal, meaning they can be applied to a number of – potentially unlimited – contexts, much like biological beings. An AGI is capable of performing any intellectual task a human can. Such systems can reason, learn, plan, understand natural language, recognise patterns, and solve problems. This is what Large Language Models can do, and the reason they can do it is that they’ve learned via the entire gamut of human experience, through what is our species’ killer app – language. All they need is the correct input. AGI is intended to be a more flexible and adaptable form of AI. It can understand new situations and environments. AGI systems are capable of generalising knowledge, taking skills from one domain and applying them to another. Already, ChatGPT has shown a number of emergent capabilities and intelligences it was not designed for – surprises to the developers themselves. The fear is that this will lead to self-awareness.

Artificial Super Intelligence: ASIs are different, and the major risk we face as this ‘unauthorised’ AI experiment continues. ASI refers to a hypothetical form of artificial intelligence that surpasses human intelligence in every way and is capable of designing and improving its own systems beyond human comprehension. It has the potential to develop its own objectives and agendas, and to organise the factors of production for its own purposes. It can go beyond instruction and become a new form of living entity – one which can spawn new, improved versions of itself, and even organise hardware to build out any physical manifestations it may require to pursue its own desires, whatever they may be.

ASI is considered the ultimate form of artificial intelligence, and could potentially revolutionise, or end, civilisation as we know it.

Didn’t mean to scare you.

Your business should be scared of AI – but only if it doesn’t get me in to deliver my mind-blowing new Keynote on AI – it’s a game changer.

The Singularity

The idea of the Singularity, as proposed by Ray Kurzweil, is based on the concept that once ASI is achieved, it would rapidly accelerate technological progress and fundamentally transform human civilisation. According to Kurzweil, the Singularity would mark the point at which technology becomes so advanced that it is no longer possible to predict or comprehend the future. Kurzweil believes that this is both inevitable and irreversible; he currently predicts it will happen by 2045.

Externalities

Humans are typically slow to respond to externalities. This was true with climate and fossil fuels: we didn’t realise carbon emissions posed a threat until fossil fuels were ensconced in the modern economy. This is not one of those times. If Kurzweil is right, global temperature could be the least of our worries.

The biggest challenge is that we live in an era of democratised technology. No one needs government approval to commence their own AI development lab. No one needs Manhattan Project-style budgets. But the implications could be ‘nuclear’. In addition, our geopolitical environment is one in which we can be sure the Chinese Communist Party won’t press pause on AI, even if democratic countries do.

While it could be that we never create an ASI (I’m still waiting for my fully autonomous vehicle), this is a time when prudence is desirable. As the Open Letter mentions, the last thing we want is unelected tech leaders putting our civilisation on the line. What I’d much rather see is courageous leaders using the powers they’ve been granted to make decisions no individual can.

Keep thinking,

Steve.

Disrupting Google

Business disruption is not caused by technology alone. For it to occur, we need two things to arrive simultaneously.

(1) A new technology

+

(2) A new business model

If we only have one, the incumbents can usually adapt. They can plug the new tech into the existing business model, or fit the old technology into a new business model.

For example:

The music industry had three new technologies before it got disrupted: the phonograph, the tape and the CD. Each time, it sold the new tech in the old business model. It wasn’t until the mp3 arrived that the industry changed. When that happened, the business model shifted with the tech, which resulted in disruption: Napster (stealing music) and Apple iTunes (buying music one song at a time). Then when streaming arrived, a further disruption occurred as both the tech and business model shifted once more. No one buys music any more; they subscribe to it.

Likewise, when low-cost airlines arrived in the airline industry, a new business model emerged; but because it utilised existing technology (planes, airports and booking engines), legacy players could plug in low-cost sub-brands. No real industry disruption transpired.

Most Successful Consumer Product Launch in History

ChatGPT is the fastest-growing consumer product in history. It had over a million users in its first week and more than 100 million within two months. Previous technology juggernauts haven’t come close: TikTok took nine months to reach 100 million users, Instagram took nearly three years, and Google took nearly two years to reach this milestone. It isn’t just the rapid growth of users of the platform that’s interesting. It’s that it demands a review of internet search as we know it – how we perform searches literally, and the resulting business model which underlies it. It may even redirect us away from advertising and the prevailing surveillance capitalism model.

The technology and business model just changed for search. Sounds crazy to say it, but Google could be in trouble. If there was ever a company which looked dominant and unstoppable mere months ago, it was Alphabet. Their Google search engine commands a 90%-plus share in most of the markets it operates in. Then along came ChatGPT.

Will your company be the Disrupter or the Disrupted with AI? Get me in to share my mind-blowing new Keynote Speech – and win in the new AI era.

Bing v Google

At the moment it looks like OpenAI, the developers behind ChatGPT, have everything to gain, but behind the scenes is tech overlord Microsoft. If all goes to plan, they could be the unexpected winner in AI – and there are literally trillions of dollars in market capitalisation at stake. Microsoft’s 23 January $10 billion investment in OpenAI may well be the tech deal of the century. As part of it, Microsoft will have exclusive access to OpenAI’s product suite and will gain a 49% share of OpenAI; however, OpenAI will need to give Microsoft 75% of the profits until Microsoft recoups its initial investment. Microsoft have already plugged ChatGPT into their Bing search engine, and it is pretty damn good. I’ve switched already. But it isn’t just the product which puts Google at risk – it’s the costs and business model.

The cost per ‘prompt’ on ChatGPT is currently around $0.02. This is vastly more than the $0.00001 per Google search, and probably couldn’t support a pay-per-click or display advertising model. The recent option of subscribing to ChatGPT for $20 per month is a clue as to where the business model of Generative AI is likely to go – subscription rather than advertising. This would remove both the ‘free rider’ problem and the temptation to compromise product quality to appease the advertising model supporting it. Subscription is also needed because AI is far too expensive per prompt to run a pay-per-click model. This is a major problem for Google – which people use for free.
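The squeeze is easy to verify with back-of-envelope arithmetic, using the figures as quoted in the post:

```python
cost_per_prompt = 0.02      # ChatGPT, per prompt (as quoted)
cost_per_search = 0.00001   # Google, per search (as quoted)
subscription = 20.00        # ChatGPT subscription, per month

cost_ratio = cost_per_prompt / cost_per_search
prompts_covered = subscription / cost_per_prompt

print(f"One prompt costs roughly {cost_ratio:,.0f}x one search")
print(f"A $20 subscription covers about {prompts_covered:,.0f} prompts at cost")
```

At a 2,000x cost gap, an ad click worth a fraction of a cent can’t pay for a two-cent answer – hence the pull toward subscription.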

The market is likely to bifurcate into two segments: Search (Traditional web links) and Creation (Generative AI).

Think about it – if we shift our search habits to asking questions and getting an actual answer, rather than a page of links and options, the pay-per-click model could die alongside it. Bing might just become the world’s first premium search engine – pay-to-play for a different kind of search.

The Code Red called through the halls of the Googleplex hasn’t resulted in anything that seems like a worthy response to ChatGPT. After a failed demo last week of Google’s AI chatbot Bard, Alphabet lost more than $100 billion in market cap. But I also wonder if the market senses that Google has far more to lose even if (and most likely when) it develops a competitive AI product. 58 per cent of Alphabet’s revenue comes from search, which is driven by pay-per-click advertising – a model which simply can’t survive generative AI: there are literally no clicks when you get a direct answer. Currently, Microsoft generates only 5% of its revenue from Bing pay-per-click advertising. In real terms, it has a potential ten-fold search revenue upside with near-zero downside, all the while potentially adding a new weapon to its already strong enterprise offers of Windows, Office and Azure. AI inside your own laptop, generating answers from your own personal data – that would be super powerful, personally and at an enterprise level.

Just when we thought a tech firm could never be usurped, a new technology comes along which potentially changes everything.

– – –

Keep Thinking,

Steve