Protesting Against AI

Listen to Steve read this post (8 min audio) – with bonus off piste commentary – give it a listen!

Union membership in Australia is in steep decline – it’s now less than 15 per cent of the workforce compared with 60 per cent in 1960. The dramatic decline has been great for many industries, corporations, and investors; not so good for the middle class and wages.

It’s pretty easy to tell how powerful a union is when their members go on strike. I can remember strikes by posties at Christmas, nurses, teachers, dairy farmers, railways, and bus drivers, to name a few. We noticed because they caused genuine disruption to our lives.

But here’s a strike that hasn’t really disrupted our lives, even if we have noticed – the Writers Guild of America Strike (WGA) in the USA.

A total of 112 days has passed in the WGA strike and yet, every night when I sit down on the couch, I am drowning in a tyranny of choice over what to stream. The only tiny miss is John Oliver’s Last Week Tonight.

While this strike action might seem a little irrelevant to Australia, it’s the shape of things to come, because AI is coming for the creative class. For more than a decade now, the narrative has been: efficiency is for robots, creativity is where humans shine.

But now there’s a revised and better assumption:

If a human can do it, so can Artificial Intelligence.

To be sure, creative output via code will be different, and not always better, but you can be sure it will be cheaper.


Bonus – Want some advice on how to use AI in your business or career? Reply to this email for a one-on-one half-hour of power with me – first 3 only.


Wrong Time, Wrong Target

What the WGA is asking for is pretty standard: improved pay, conditions, superannuation and the like.

But there are two things that aren’t standard.

Firstly, they want a minimum number of writers in the writers’ room, which seems kind of insane. (In what other industry do you get extra staff just because you ask for them?) And secondly, they have demands about how AI is handled. They are requesting that AI can’t:

  • write or rewrite literary material.
  • be used as source material.
  • be trained on writers’ material.

So far, the studios have rejected the demands, and it is difficult to see them ever agreeing.

AI is essentially a gift-wrapped present for the studios and big tech, a pause button allowing them to recalibrate their spending in the midst of the streaming wars.

In 2022, streamers spent a whopping $US26.5 billion on original content production, a 45 per cent increase on 2021. That’s the kind of cost increase no sane business desires. And now, 2023 looks like it will see a decline in spend for the full year, given all the production that has been paused.

Meanwhile, the number of streaming accounts hasn’t declined at all; in fact, Video on Demand (VOD) as a category is expecting an increase of 65 million subscribers in 2023.

What we can assume is that the past 100+ days have been filled with live AI experiments to see how good AI-based production could actually be. Another thing the Writers Guild seems to have forgotten is that the biggest investors in their services also happen to be tech companies—yes, those building the AI technology that can replace the writers and even the actors.

There’s a lesson here for all forms of creative work.

Embracing AI

Fighting a technological tide is never a good strategy for workers. A better approach by the creative class would be to figure out how to share in the upside of AI-generated creative content. What they should be demanding is royalties from what AI can generate, especially since the effectiveness of the AI machine is purely a function of the training data it learns from – i.e. the writers’ data. This should take the form of asking for protection of their “digital twins” – their biometric output – licensing fees for any content put into AI data sets, and, of course, residuals from any “original” AI content generated by models trained on their creative works.

The big problem, of course, is that with billions of parameters in every AI database, it is very difficult to trace output back to original content creators and divide up revenue so that there is enough to go around. This has been big tech’s greatest hack. They add a layer of innovation by aggregating small pieces of content, all of which have little value on their own, but extraordinary value in aggregate. For a very long time now, big tech has gotten most of its raw materials for free; this is just the next chapter. No wonder they have been such financial juggernauts.

The only viable path would be for creators to be paid for database inputs, not outputs.


This month I’ll be heading to every state in Australia, Indonesia and Mauritius to do my new Keynote on AI – this is a quote from one I did this week:

“I just wanted to say how enlightening and entertaining I found it. Just a great presentation all round…. I’m getting started on my AI journey…”

It’s time for some Sammatron at your next event – Get me in – Don’t miss out!  


Database Wars

The streaming wars might well be replaced with the AI database training wars. And it isn’t just limited to high-end content creation, as we are seeing in the US writers’ strike. Wherever any type of content or information is put into a system and published, an AI on the other end is being trained to replace the people populating that data set. Something as simple as saying ‘yes’ to a phone call being recorded by a service provider now forms part of the AI aggregate. It’s only a matter of time before most industries in Australia, and globally for that matter, arrive at their “AI Writers Strike” moment. And we do need to find a path forward that makes sense for all economic participants, as AI will and should be embraced everywhere it is possible.

Keep thinking,

Steve.

Deep Fake Love & AI Girlfriends

Pic above from the home page of DreamGF.AI

Listen to Steve read this post (10 min audio) – give it a listen!

A colleague and good friend of mine was the Chief Marketing Officer of the world’s biggest brewer in the 2010s. He had challenges aplenty. The rise of craft beers, fragmentation of brands, the decline in alcohol consumption…. But when I asked him who his biggest competitor was, his answer surprised me: Tinder.

He went on to say that dating apps like Tinder are a very clear substitute for pubs, clubs, and bars. Many patrons have traditionally gone to these locations to meet people. And while they may not like to admit it, when they enter the club, they look around at the options available and say, “Yes, No, Yes, No, No, No, Yes.” They immediately make a judgment on who they may like to engage with. And this is exactly what happens on Tinder, as people swipe right, left, right, left, left, left, right. You could even say that Dutch courage, or ‘beer goggles’, has been replaced by on-screen filters that make everyone look a little bit better than in real life. No need to leave the couch to find a potential partner, just sit and swipe in your comfy tracksuit pants.

Digital interactions imitate the real world.

Finding Love Online

Online dating is very big business. We’ve come a long way from the dating videos of the 1980s – an era when mostly awkward people, and mostly men, sat in front of a camera in cable knit jumpers. The market is now worth over $US10 billion globally in revenue. The entire mating process has changed more radically in the past 20 years than it did in the previous 2000 years. We now expect that people in a relationship met online.


Bonus – Listen to a radio interview I did with Dwayne Russell to discuss AI’s impact on AFL and other sports


AI Substitutes – Deep Fake Love

While we are all familiar with Deep Fakes being used in politics and business, emerging from the same servers are AI girlfriends. And I’m not just talking about a text-based chatbot people can flirt with. Things are now much more real than that.

AI-generated girlfriends now boast continuous AI-generated video, with almost no noticeable difference from a live video chat with an actual person. The voice is hyper-realistic, and the ‘bot’ will have the exact look, identity, behaviour, and personality the end user has designed for it, all generated by the customer via simple text prompts. An AI girlfriend who is at her “partner’s” beck and call, giving him (and it is mostly men) the kind of response he wants, every – single – time. We’ve now entered the era of designer partners – something no plastic surgeon could possibly compete with.

This is not nearly as niche as we may imagine. The list of tech companies providing what they call Virtual Companions is very long indeed: Eva.Ai, PicSo.ai, DreamG.ai, AI girlfriend, Myanima, Intimate – AI Girlfriend, RomanticAI, CoupleAI – Virtual Girlfriend, Replika: My AI Friend, Smart Girl: AI Girlfriend, My Virtual Girlfriend Julie.

Many of these are funded by respected venture capital firms and the estimated number of people with ongoing AI-centric relationships is in the many millions. Subreddits already exist where people discuss their AI companions like they would a football team.

What happens next is easy to imagine: these AI personalities added to soft robotics, where a fully formed physical likeness of an AI is built and sent to the customer. Just imagine the Sophia Robot with a Boston Dynamics-like exoskeleton underneath it. When this happens, AI will exit the screen and enter the world just as Hiroshi Ishiguro has imagined.


This month I’ll be heading to every state in Australia, Indonesia and Mauritius to do my new Keynote on AI – this quote was from my client after yesterday’s keynote:

“It wasn’t just insightful, it was very funny as well – I didn’t expect that”

It’s time for some Sammatron at your next event – Get me in – Don’t miss out!  


The AI Girl on Call

The business model is just as you’d expect – getting started is mostly free. But the more engaged the end user becomes, the more human they make the AI, and the more interactions they have, the more it costs them. The companions, of course, send frequent messages to their end user, prompting more interaction and more spending.

This is where it gets dangerous, especially given these interactions are so social and intimate. Our DNA is so heavily geared towards relationships that the human mind has a very difficult time delineating between real and online interactions. This is why people develop quasi-kinships with media personalities. In psychology, these are known as para-social relationships. We know the person so well that, in our minds, we believe they know us too. It’s also why celebrities are frequently and effectively used in advertising.

We don’t need to be psychologists to imagine the potential behavioural implications:

  • Men designing and controlling women.
  • Young people developing unrealistic expectations of what a healthy relationship is.

One could easily postulate these kinds of AI relationships quickly devolving into simulations of violence, and even paedophilia. It’s a veritable minefield of long-term social issues waiting to happen. All of which, the AI companion providers claim, they have guardrails against – of course, we’ve heard this before.

Disruptive Implications

The biggest technology disruptions are usually accompanied by simultaneous social changes. Just think about how the web changed the way we live. Likewise, we should start to think seriously about how AI and robotics will change the nature of human relationships. It may well be that dating apps get disrupted by transhuman-AI relationships. You may remember the movie ‘Her’ from a decade ago where the protagonist fell in love with his operating system. Well, we are already there; it’s just happening in dark corners of the internet.

What we ought to be prepared for are changes that are fundamental shifts in humanity. An era of transhumanism. It is foreseeable that cohorts of society will push for transhuman marriages. And if you think that sounds strange, just imagine how gay marriage or transgender rights would’ve sounded to anyone a few hundred years ago.

As AIs enter every single realm of society, we’ll see changes that make the past 30 years seem incidental. Fundamental shifts in human-computer interaction that change the human dynamic, creating a kind of techno-symbiosis – a species-level shift. It’s not just intelligence that’s up for redefinition, but possibly humanity itself.

Keep Thinking,

Steve

Could AI eat itself?

Listen to Steve read this post (9 min audio) – Includes bonus ideas – give it a listen!

Up until November last year, when the first useful chatbot – ChatGPT – was released to the general public, almost everything on the internet was essentially ‘us’: human-centric content. Articles, posts, podcasts, chat forums, videos, images — all created, posted, described and tagged by people.

These 30 or so years of deep and wide content populating the internet have been the perfect training ground for Generative AI. All the breadth, nuance and insight of creative humans is what enabled the output to be so ‘human-like’ – even if it comes off a little dry at times. And in less than a year, the internet is starting to morph and change its shape.

Generative AI Ingredients

There are three key ingredients that have made the generative AI revolution we’ve just entered possible:

  1. the AI models, most notably the arrival of Large Language Models (LLMs), which are the neural networks;
  2. the chips and processing units that can cope with such massive computation across many billions of parameters; and
  3. the data sets that populate the LLMs and get processed.

Of course, without all of them there can be none of it, but the last ingredient – the data sets – is vital, given that is what the AI models learn from and use to predict what we want and, essentially, provide the ‘generative content’.

The AI Internet

Just by reading the web in recent months, we can already see the shift. Social media is increasingly besieged with newly AI-generated content, or content telling you how to create AI-generated content, while technology firms and digital media are cutting staff in a move towards automated content.

Both the demand and supply of AI-generated content are skyrocketing, with the most common job posts in the content realm requiring ‘AI work-withs’ to accelerate output to 100x what a mere human could produce.

In mere months, the digital landscape has transformed itself. Sites once filled with human insight and opinion are now flooded with AI-generated text, audio, images, and video. Some AIs are even starting to quote and cite each other, creating echo chambers of misinformation. The internet is going through a hyper-scaled AI industrialization. In a meaningful way, the internet is becoming less human.

While much of this is anecdotal, some research is starting to emerge which demonstrates these changes.


Bonus – Check out my Robot literally 3D printing a building – Visionary Investors wanted!


Experiments in AI

A study emerging from a collaboration between the Universities of Oxford, Cambridge and Toronto, and Imperial College London, found that the type of data in the models is all-important. The researchers concluded that if you train an AI system on what they call ‘synthetic data’ – that is, data generated by another AI system – the models degrade rapidly, and ultimately collapse and fail to function. It may well be that data is a little bit like food. That which is generated naturally by humans, or ‘organically’, is different from the manufactured type.

This is where things get interesting, even a little strange. Given that all LLMs are trained on huge bodies of human text, it seems logical that we’ll need to update that corpus or continue to add human content. And already, that requirement is being compromised by the AI era of the web.

This research is essentially saying that if enough of the internet is output from Generative AI models, then the models will stop working – AI could well eat itself. But we don’t know yet, because most of the training sets are not live and rely on pre-generative AI internet data sets from 1-2 years ago, although Google Bard and Microsoft Bing are starting to add live data.
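
To make the study’s mechanism concrete, here is a toy sketch in Python – my illustration, not the researchers’ actual method. The ‘model’ here is just a fitted Gaussian; each generation is trained only on samples published by the previous generation’s model, and the small estimation errors compound:

    import numpy as np

    rng = np.random.default_rng(42)

    # Generation 0: "human" data, rich in variety.
    data = rng.normal(loc=0.0, scale=1.0, size=100)

    for generation in range(1, 101):
        # "Train" a model: estimate the distribution from the current data.
        mu, sigma = data.mean(), data.std()
        # "Publish" synthetic data by sampling only from the fitted model,
        # then use it as the next generation's training set.
        data = rng.normal(loc=mu, scale=sigma, size=100)
        if generation % 25 == 0:
            print(f"generation {generation:3d}: std = {sigma:.3f}")

    # The spread of the data drifts downward generation after generation:
    # rare, nuanced examples vanish first, and variety collapses.

Real LLMs are vastly more complex, but this compounding loss of the distribution’s tails is the same basic failure mode the researchers describe as model collapse.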

Dead Internet Theory

The Dead Internet Theory is a quasi-conspiracy theory that has been around for a few years. The general idea is that the internet has largely been taken over by bots – with Statista claiming they account for almost 50 per cent of web traffic. Given that generating attention and making money has become so algorithmically driven – a contest for SEO, likes, followers, and fans – a way to win the game is to release bots that generate content and populate your feed or push your desired political message. Theorists posit that the internet will eventually be a battle of bots against bots, with humans mere bit players.


After doing 600+ keynotes in 40+ countries, my new Keynote on AI is my best ever… but don’t take my word for it – this quote is from an AI seminar I presented at yesterday:

“It was Steve Sammartino and daylight – no one came close. He made the entire event worthwhile”

It’s time for some Sammatron at your next event – Get me in – Don’t miss out!  


The Other World Internet

This, and the potential for Generative AI consuming itself, have dramatic implications for the internet. The power of the internet has been derived from human nuance and insight. If it becomes predictive AI giving average insight from a world of statistical averages, the internet could become a kind of other world – where nothing is really ‘human’, or really groundbreaking. If, and it is still a big ‘if’, AI becomes a circular reference tool with degrading data, any advice it provides would just become a loud echo chamber worth avoiding.

An alternative thought is that Generative AI starts to create its own emergent behaviour and ideas – thereafter developing deep insights humans would never arrive at.

It’s a live experiment – it will be worth watching this one closely.

Keep Thinking,

Steve.

AI – Why we must Redefine Intelligence

Listen to Steve read this post (7 min audio) – those who listen get extra commentary – I go off piste!

There’s a huge debate going around right now on whether or not Artificial Intelligence is actually ‘intelligent.’ While some say it poses a threat to our species, others argue that it is far less advanced than anyone realizes. Included in the latter cohort is AI expert Rodney Brooks, whom I believe to be among the best in his field. He contends that we’ve been vastly overestimating OpenAI’s large language models, asserting they are much ‘stupider’ than we assume. While that may well be true, it doesn’t matter. Let me explain why…


Check out my 2 min video on p(Doom) – be sure to comment & send it to a friend.


Brooks asserts that large language model AIs are simply adept at generating what an answer should look like, which is distinct from knowing what the answer should be. When asked if AI is poised to become the sort of artificial general intelligence (AGI) that could operate on an intellectual level similar to humans, he responded:

No, because it doesn’t have any underlying model of the world; it doesn’t have any connection to the world. It is correlation between language.
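
To see what ‘correlation between language’ means in practice, here is a toy sketch (my illustration, not Brooks’s). It learns nothing about the world – only which word tends to follow which – yet it still produces text that ‘sounds right’:

    import random
    from collections import defaultdict

    # A tiny corpus standing in for "the internet".
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    # Record which word follows which: pure correlation between words,
    # with no model of cats, dogs or mats behind it.
    follows = defaultdict(list)
    for w1, w2 in zip(corpus, corpus[1:]):
        follows[w1].append(w2)

    random.seed(7)
    word, output = "the", ["the"]
    for _ in range(12):
        word = random.choice(follows[word])  # pick what "sounds right" next
        output.append(word)

    print(" ".join(output))  # fluent-looking, yet nothing is "known"

Scale those correlations up by billions of parameters and the output becomes remarkably fluent – but the mechanism is still prediction of form, not possession of knowledge, which is exactly Brooks’s objection.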

Intelligence doesn’t need to be ‘Human’

Brooks’s error lies in his analysis from a solely human perspective. His view is that for intelligence to surpass us, it must behave like us. It must possess an internal sense of knowledge, a context of the world. However, even in nature, we observe a considerable number of intelligent species with entirely different contexts from humans. Biological beings go about achieving their objectives and expanding their species using various forms of intelligence. There’s no reason to believe that AI can’t or won’t do the same. More importantly, AI doesn’t need to understand in order to act. I don’t comprehend how my heart beats, how my digestive system functions, or how I can control my arms to catch a ball, yet I can perform these actions. The questions we really need to ask are:

(A) What is intelligence?

and

(B) Can it exist in non-biological forms?


Bonus – Listen to me be Interviewed by Sir Bob Geldof on the risks of AI.


Re-Defining Intelligence

While there are many types of intelligence – abstraction, knowledge, understanding, self-awareness, learning, emotions, reasoning, planning, creativity, critical thinking, and problem-solving – we can more generally define it as follows:

Intelligence: The ability to absorb data, infer information, retain it as knowledge, and then apply it to generate outputs relevant and useful to a particular context.

Through this definition, we could consider plants and ecosystems as intelligent, and clearly, AI. Understanding is not part of the equation. For all we know, this could be a purely human phenomenon, and irrelevant to creating new forms of intelligence outside of the biological realm.

If AI begins to organise information and the physical world independently, and without direction from humans, then what it does or doesn’t know about itself or the world doesn’t matter. If it is acting as an intelligent species, it will impact other species – and become part of the wider ecosystem.

While AI is simply a pattern recognition device, so too are humans. Large Language Models just being able to predict what ‘sounds right’ might actually be enough. Especially when the data input is language – because language is the single mechanism that holds together all forms of human knowledge. Hence, they do not need to have a direct connection to the world – they instead have one which lives inside a different context – a computational one. And it is therefore foreseeable that these models will be able to predict and see patterns we cannot, and make decisions in their context.


This is some feedback I got from a client on my new Keynote on AI –

“It was the best keynote I have seen in 20 years – it was that insightful.”

It’s a revolution – Get me in – Don’t miss out!  


If you take anything away from this post – let it be this:

Don’t let yesterday’s definitions of anything dictate what your business, your future, or our world might look like.

(Oh, and have the courage to disagree with others in high places if you’ve done thorough study via reputable sources.)

Keep Thinking,

Steve.

Elections in the AI era

Listen to Steve read this post (7 min audio) – let me know if you listened instead of reading this!

“Personally I think the idea that fake news on Facebook, influenced the election in any way — is a pretty crazy idea.” – Mark Zuckerberg, 2016.

The echoes of Zuckerberg’s statement back in 2016 resonate loudly today.

What may sound less crazy now is this: The 2024 US election cycle could possibly be the first authentic AI election. Donald Trump and Gov. Ron DeSantis have already used fake AI images of each other and we are still 16 months out from the vote.

This is an election where the capabilities of Generative AI will quite simply have a massive influence. We understand its potential, we anticipate its use, yet governments appear largely indifferent to its probable role in election campaigns. The potential repercussions on geopolitics and the global economy are staggering.

It’s almost a certainty that the forthcoming election will transcend the era of Facebook ads. We are venturing into a realm where an underlying uncertainty will accompany everything we witness. Our world has advanced past glitchy fakes into a phase known in AI as ‘No Noticeable Difference’.

Democratization of Disinformation

Unless we personally attended an event, we may forever question its authenticity. Considering this evolving reality, along with the apparent legislative apathy, it’s crucial that we understand the redefined landscape of politics in the AI era.

Historically, election interference has been a costly and challenging endeavour, typically orchestrated by rogue states like Russia, China, and North Korea. And now, anyone with a laptop and personal election agenda can fabricate anything they choose.


This week Sir Bob Geldof interviewed me about AI.

So if you get me in to do my new Keynote on AI, you will be in very good company indeed.

He was astounded at what’s in store – Don’t miss out!


The Perfect Market

Generative AI thrives on the data set it’s trained on. In this context, politics is exceptionally vulnerable. The public life led by candidates offers a treasure trove of data, with endless recordings across media platforms providing rich training material. Combine this with the readily available Generative AI tools capable of generating near-perfect duplicates of voice, video and images, and the populace at large possesses the means to create what could be accepted as ‘real’ by an unsuspecting voter.

The true democratization of a technology is signified when it becomes integral to the political process. We’ve witnessed this evolution with print, radio, TV, and social media. The next stage in this progression is the advent of Generative AI.

Electoral manipulation is set to intensify exponentially. Digital falsifications will be of higher quality. Microtargeting will be significantly more potent, possibly reaching the granularity of individuals.

Political advertising can graduate from hastily cobbled-together talk pieces to cinematic-quality productions, potentially painting dystopian visions of the opposition in power and inducing irrational fear. Those who once lacked resources can now participate on par with nation states.

However, it’s the subtler uses and their societal implications we ought to focus on.

Subtle Social Implications

As conscientious voters, we used to be able to spot a fabricated picture, video, or statement – especially if the political event or speech never transpired. Mainstream media generally does a commendable job at fact-checking in this regard.

But if present-day AI is used to redub a speech – subtly modifying a few words or a sentence to distort a candidate’s message, with impeccably mimicked facial movements – the game changes entirely. Consider this happening after a presidential debate, with slightly manipulated footage making its way into the multitude of YouTube highlight reels. Discerning truth from fiction could become difficult indeed.

Once fake information permeates the political landscape, we risk descending into an era where cynicism usurps belief in everything we see. In such an environment, a candidate could deny anything by claiming it is an AI-generated fiction. This would especially be the case with controversial or scandalous footage they never wanted the public to see.

Post Media Reality

A crazy idea I have is that we may enter a new post-media era. If the only things we know truly occurred are those we witnessed with our own eyes, it would be akin to living, once again, in a world without newsprint, radio, TV or the internet. When anything might not be real, there’s a very good chance we won’t pay attention to any of it. And if that happens, the attention economy could end up eating itself out of existence.


Keep Thinking,

Steve.

AI – Risk of Extinction

Listen to Steve read this post (8 min audio) – let me know if you listened instead of reading this!

Imagine for a minute you were given an investment opportunity which could be life-changing, enough to make you better off than you are now, maybe even seriously rich. But the price of entry was that you had to put every single asset you owned into it and there was a small chance it would go to absolute zero — even a 5-10 per cent probability that this would occur.

Would you do it?

I wouldn’t consider it, even for a second. Yet, these are the risks we are being asked to take with our live Generative AI experiment, at least according to some AI experts.
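
The arithmetic behind my refusal is simple enough to sketch. A one-off 5-10 per cent chance of ruin might feel survivable, but existential bets get replayed – each new model generation is another spin of the wheel. Assuming, purely for illustration, independent rounds at a fixed ruin probability:

    # Probability of avoiding ruin when a total-loss event has probability
    # p per round and the gamble is repeated n times (independence assumed).
    for p in (0.05, 0.10):
        for n in (1, 10, 20, 50):
            survive = (1 - p) ** n
            print(f"p(ruin)={p:.0%}, rounds={n:2d}: p(survive)={survive:.1%}")

At a 10 per cent ruin probability per round, the chance of still being in the game after 20 rounds is about 12 per cent. And no upside pays out on the branch where everything is gone – which is the whole point of the analogy.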

p(Doom)

If you think p(Doom) sounds worrying, trust your instincts. p(Doom) is the term AI researchers are using to describe the probability that Artificial Super Intelligence will emerge and become an existential risk for humanity. (What a wonderful little moniker.)

Here is the crazy thing. Many learned AI researchers now have this probability sitting as high as 20-50 per cent. Kinda scary. Even a 5 per cent chance of wiping out our species is a worry… and there is almost no one in the field who puts it lower than this. Sam Altman, the CEO of OpenAI, which created ChatGPT, has publicly said the risk is real, and he has a p(Doom) of around 5 per cent.

It’s at this point that we must remember that we are not talking about something bad happening here, like, say, a recession, or a war, a hurricane, or even a pandemic. All of these we’ve faced before and, with a lot of pain, have overcome, collectively. We are talking about the end of humanity – it doesn’t get any heavier than that.

  • Reply to this email and tell me what about AI worries you.

Says Who?

Some of those with a p(Doom) at worryingly high levels are not fear-mongering crackpots, but those deeply involved in giving birth to AI. Here are some of the worried AI researchers and their p(Doom) percentages.

  • Michael Tontchev, a former Microsoft software developer and current senior software lead at Meta, has his at 20 per cent.
  • Paul Christiano, a former OpenAI researcher who also holds a Ph.D. in theoretical computer science from UC Berkeley, has his at 50 per cent.
  • Eliezer Yudkowsky, a renowned AI researcher and decision theorist, has his at 50 per cent.
  • Geoffrey Hinton, known as the godfather of AI and until recently a Google employee, has his at 50 per cent.

Cold War 2.0 & AI Bunkers

As a keynote speaker on the future and AI, the most common question I get asked is whether we will face a ‘Terminator Moment’. And just like at the height of the Cold War, those with their fingers on the button seem to be the only ones with a bunker they can run to if things go wrong.

In 2016, Altman said in an interview that he was prepping for survival in the event of a catastrophe such as a rogue AI, claiming to have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defence Force, and a big patch of land in Big Sur that he can fly to.

Altman’s doomsday vision of AI gone wrong is not uncommon in Silicon Valley. No tech billionaire worth his salt doesn’t have a post-apocalyptic contingency plan and remote bunker. Both Peter Thiel and Google co-founder Larry Page have snapped up land in New Zealand and built bunkers. They literally have private jets filled with fuel – which they don’t use – and pilots paid to wait for this ‘nuclear moment’.


This is some feedback I got from a client on my new Keynote on AI –

“Steve was amazing, awesome and all the superlatives. His insights on AI were absolutely incredible as was the feedback from our customers.”

It’s a revolution – Get me in – Don’t miss out!  


The AI Saviour?

Readers will know that I’m typically positive when it comes to the emancipating power of technology. And the last thing I want to be accused of is fear-mongering. There is a counter-argument to the worries about the AI threat:

We may not be able to survive without it. Really.

It seems to me that the probability of our species succumbing to one of the other existential risks is greater than most experts’ AI p(Doom). The nuclear threat is still very real, and possibly greater than it ever was during the Cold War. While we theoretically control it, we can only count ourselves lucky that a crazed suicide bomber or rogue terrorist group hasn’t secured and deployed nuclear weapons.

Likewise, despite our progress with renewable energy, I can’t see progress by any large nation-state that gives me confidence we can reduce our global emissions to the level needed before we reach a point of no return. We are still largely addicted to fossil fuels, GDP, economic growth, and consumption.

Maybe the thing we actually need is an all-omniscient, benevolent AI to save us from ourselves!

An AI which can uncover new, yet-to-be-discovered forms of highly available energy, or ways to ensure we circumvent nuclear disaster via ‘Artificial Diplomacy’ – an AI which can help us navigate species-level dangers that are already clear and present.

Keep Thinking,

Steve.

Banning AI in Schools

Listen to Steve read this post (6 min audio) – let me know if you listened instead of reading this!

With Generative AI, we now all possess virtual PhDs in every subject. Much of the intellectual labor we once performed can now be done for us, and the results are often better than what most humans could produce. This prompts the question of what education’s role should be.

How to Cheat!

ChatGPT is, of course, at the forefront. While many schools and universities have been attempting to identify or ban “ChatBot cheating,” one of the more commendable approaches I’ve observed was taken by the University of Sydney.

First-year medical students enrolled in the course “Contemporary Medical Challenges” now have ChatGPT incorporated into their curriculum.

Students were given a task: to formulate a question on a modern medical challenge of their choosing, prompt ChatGPT to “write” an essay on the topic, and meticulously review and edit the AI’s output. They were required to complete at least four drafts and reviews, edit and re-prompt the AI, and then refine it into a submission-worthy final draft.

The main criterion for success was not just the ability to shape the prompts so that ChatGPT produced an optimal essay, but the process and thinking the students went through as they edited the essay and re-prompted the AI to delve into the appropriate arenas of knowledge.
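
For the curious, here is a minimal sketch of what that draft-review-re-prompt loop looks like when automated with the OpenAI Python SDK. The model name, prompts and loop structure are my assumptions for illustration only – in the course itself, the reviewing judgment was the students’ own, not a script’s:

    # A sketch of the draft -> review -> re-prompt loop (illustrative only).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # hypothetical model choice
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    question = "What are the biggest challenges in rural telehealth?"
    draft = ask(f"Write a 500-word essay answering: {question}")

    for _ in range(4):  # at least four drafts and reviews
        # In the course, this critique step is the student's own judgment;
        # here a second prompt merely stands in for it.
        critique = ask(f"List the three weakest claims in this essay:\n{draft}")
        draft = ask(
            "Rewrite the essay to address this critique.\n"
            f"Critique:\n{critique}\n\nEssay:\n{draft}"
        )

    print(draft)  # the submission-worthy final draft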

“We want to make sure the grads are not just getting ChatGPT to do their work, we want them to have discerning judgment, and a curiosity about the future,” course coordinator Martin Brown said.

“You have to work with it. You can’t ban it – it would be crazy.”

This is truly an enlightened approach.

It’s clear that there are different types of knowledge. We have basic memorization, reproducing information, and collating information – tasks that educational institutions have traded in for centuries. But when AIs like ChatGPT can perform those tasks for anyone, for free, it’s time to reevaluate education.

It might seem like an odd thing to say, but the reason we don’t evaluate students on their ability to lift heavy objects is that machines were invented before the modern K-12 school system was. Even when we do physical education at senior school, it primarily becomes about human energy systems and biomechanics.

This is some feedback I got from a client on my new Keynote on AI –

“Steve was amazing, awesome and all the superlatives. His insights on AI were absolutely incredible as was the feedback from our customers.”

It’s a revolution – Get me in – Don’t miss out!  

Age Gates and AI

While we’ve had calculators and spell-checking computation for a very long time, no one would dispute that those who know how to add, spell, and write have a distinct economic advantage in life. There is a reason we only introduce calculators and computers into education once kids know how to read and write – we need to be able to judge the output.

We need to know what good looks like, even when a tool can elevate what would usually be good into something great. We’ll need to exercise caution when introducing AI in the early years of education and be deliberate about incorporating it at senior and tertiary levels.

AI won’t eliminate the need for deep domain knowledge; in fact, it may intensify it. Those who know more will be able to derive more from the generative AI systems at our disposal. They’ll know what to ask it, how to obtain better revisions, and, most importantly, discern whether what it has generated is acceptable. In some ways, AI will transform much of what we do into curating and conducting possibilities. And, just like in the art world, this requires judgment and even taste.


Keep Thinking,

Steve.