Have you ever received some work from someone and just known it wasn’t really from them, but from AI? And so, you decided to answer their query with an ‘AI reply’ of your own?
You’re not alone – it’s becoming the rule, rather than the exception. (BTW – I typed this entire post)
Welcome to the Synthetic Internet.
All things evolve, and the internet is no different. It’s almost as if it’s its own organism, changing shape in response to its environment. So, what is that environment?
It’s this: An internet increasingly populated, shaped, and used by AI and non-human agents. A place where AI acts as a proxy for people. An internet where AI was first directed by humans to synthesise our thoughts— but is now starting to self-generate based on what it has already learned. The content, the ideas, the interactions, the policies— and eventually, the culture.
The internet is starting to develop relevance loops entirely independent of human users.
It is entirely plausible the internet becomes its own atmospheric layer— a kind of cloud consciousness floating above human interaction.
In this ‘ghost loop,’ AIs will talk to each other on our behalf—generate answers, then new problems, then new answers to those problems—creating an output ecosystem where we humans become the noise at the edge of the signal.
There’s a slow shift happening online—so subtle it’s easy to miss, but it’s already in motion.
I call it: The Mirrorworld Drift.
It’s like the ocean—we can see ourselves reflected in it, but it runs much deeper than the surface. A level humans simply can’t reach, or breathe in. And like the ocean, we’ll only visit the edges—maybe dip into it and see what we can extract—but it will teem with strange creatures, spawned, living, and interacting without us even knowing they exist. Even if we could get deep enough to see it, we wouldn’t understand these new species of thought.
And just like the ocean, it will be bigger than us, yet it will have a huge impact on our human, land-based ecosystem.
It starts with something seemingly harmless: using AI to help us write, create, respond, and automate. But now we’re seeing a strange loop emerge: AIs writing content that other AIs are trained on, generating answers to questions only machines asked. A blog post written by an AI, summarised by another AI, recommended to a user who’s also… an AI, which will cause another AI to buy something.
This isn’t dystopia. It’s a quasi digital Darwinism, a self-spawning. More than just a Synthetic Internet, it’s a synthetic species emerging in real time. A neo-genesis. A parallel universe where the fake then becomes real, but only unto itself.
Separate from the Synthetic Internet, The Mirrorworld Drift describes the moment when the human layer begins to recede—not from fear, but from fatigue. The internet becomes otherworldly, one we can’t comprehend or breathe in.
ChatGPT can ace elite law school exams, write better essays than grad students, and leave professors genuinely unsure if they’re grading a machine or a human. But more than two years into the generative AI boom, the predicted white-collar job apocalypse? Nowhere to be seen.
No tidal wave. No Terminator. Just… mild disruption. So what’s going on?
This paradox has been debated for a while, and in my own work with clients, it’s been clear for some time. Now, the data is finally catching up.
When you line up employment data against the jobs supposedly at high risk of AI automation, something strange happens: most of those “endangered roles” haven’t actually lost jobs.
🧾 Accountants? Still there. 💼 Lawyers? Still talking and typing. 📇 Data entry workers? Surprisingly untouched.
But… two professions have taken a hit: writers and computer programmers.
Why these two?
It’s not about how “hard” a job is. AI doesn’t care if you have a PhD or can barely string a sentence together. What it struggles with is something deeply human: chaos.
AI doesn’t fail because tasks are intellectually complex — it fails when the workflow is messy. If a job requires juggling fragmented tasks, shifting priorities, and reacting to ambiguity — basically, being a manager on a Monday — AI still flounders.
Let’s break it down:
A lawyer manages a flood of tasks, calendars, calls, and humans.
An office junior is running from the copier, to a coffee run, to a crisis.
These roles are fluid, dynamic, reactive. AI? It can handle all the complexity in the world — as long as it’s on a single playing field. It hates changing fields. It hates fluidity.
But writers and coders? That’s a different story. They’re often solo workers with well-defined inputs and outputs. Tasks like “write me a media release for our downturn in sales due to Trump tariffs and expected future layoffs” or “Generate Python code that does X” are exactly what LLMs were built for.
Plus, there’s the gig economy twist:
Freelancers are easy to replace because they are task-oriented. Just plug and play.
The very traits that once made tech workers and content creators “future-proof” — autonomy, asynchronous workflows, digital deliverables — may have actually made them easier to automate.
Here’s the irony: The Silicon Valley gospel of self-optimisation and hyper-efficiency didn’t future-proof jobs — it made them fragile.
Meanwhile, the receptionist with seven browser tabs open, three phones ringing, and a half-eaten sandwich? They’re still in the building.
So what does this mean for you?
If you want to stay ahead of AI, lean into the parts of work it hates:
Ambiguity
Interaction frequency
Human nuance
Mess
Above all, remember: AI struggles most with changing playing fields. Moving from a laptop to an app, to a warehouse, to a boardroom… the more places and people you interact with in a day (or a week), the safer your work is.
Yes, you’ll use AI — all day, every day — but it won’t usurp you.
What AI really struggles with is moving between tasks and shifting goalposts. It’s great at creating pieces, but not at stitching pieces together from physically different worlds.
It turns out, the future of work probably isn’t about optimisation — it’s about managing the chaos of humanity: the twists, the turns, the beautiful mess.
The future of work may just belong to the brilliantly, usefully disorganised.
Let’s not sugarcoat this — Apple is so far behind in the generative AI race it’s not even funny. I’d almost say they’re Kodak-ing themselves. Remember Kodak? They had the tech for digital cameras and just… ignored it. Apple’s walking a similar tightrope with AI, and the rope’s getting thinner by the day.
Yes, they announced big things in 2024 with “Apple Intelligence” — their shiny new AI initiative. But let’s be real: that was all hype, zero delivery. There’s no rubber on the Apple AI road at all. Nothing of substance has landed in our hands. The keynote buzz has fizzled, and we’re left wondering what — if anything — is actually coming.
And then there’s Siri, Apple’s supposed digital assistant. Honestly? Siri is the modern-day Clippy — a sad, awkward relic trying to be helpful but just getting in the way. It can barely string together a useful sentence, can’t read web pages, can’t pull news summaries, and still gives responses like:
Q: “What’s the capital of Thailand?” Siri: “The capital of Thailand is T.”
Really? Come on.
But let’s zoom out a little. Where is this all going?
In the next 12–24 months, we’re going to see personal AIs become a reality. I’m talking about AI systems that truly know us — not just chatbot-level assistants, but hyper-intelligent agents that live across our digital ecosystems: phones, laptops, cloud storage, email, social media, calendars, files, photos, phone calls — everything.
These AIs will:
Tap into our digital memories — what we’ve said, written, created, posted, liked.
Understand our patterns, preferences, and personalities.
Help us write, plan, ideate, recall — even anticipate what we’ll need before we ask.
And here’s where it gets wild: they could even generate live video avatars of us — perfectly mimicking our faces and voices — to appear on calls, in meetings, or in content on our behalf. We’ll effectively be able to be in many places at once. AI versions of us, showing up, speaking, presenting, doing — while we sleep or focus elsewhere. That’s the level we’re headed toward.
It won’t just be “Ask AI.” It will be:
“Hey, find that idea I had about a product in 2019, connect it to that presentation I shared with Jess last week, and create a mock pitch deck.” “Also, present it in a Zoom meeting as me, using my voice and expressions.”
And here’s the kicker: Apple is the one company that could actually nail this.
Apple have captured our digital lives: our devices, our photos, our messages, our purchases, our iCloud backups. They have the security architecture and the privacy reputation to do this right. If they built a truly personal LLM — an on-device AI model trained on your data and your history — that could be the most valuable, trusted AI in the world.
Imagine the power of GPT-4, but it knows you. And it’s running locally, securely, privately, on your iPhone and Mac. That’s the future. And that’s what Apple should do.
But if they don’t move fast — really fast — they risk becoming irrelevant in this next wave. Just another hardware company. A luxury phone-maker. A spectator while the likes of OpenAI, Google, Amazon and Meta reinvent the user experience.
Let’s also not forget: Apple has over $160 billion in cash reserves. They have every resource available to them. The engineering talent. The brand loyalty. The user base. The opportunity. And yet — nothing.
It’s worth remembering that no company is untouchable. The mighty do fall. And if Apple doesn’t get serious about building the AI future — not just reacting to it — they might just watch their empire erode while others shape what comes next.
Vibe Coding is RAD. If you can talk, describe, or vibe an idea, you can now build software. No syntax. No debugging marathons.
Just words + AI = working code.
For decades, coding was a mysterious art, locked away behind keyboards and cryptic symbols for nerds like me. Not anymore…
Now the AI writes the code based on your verbal instructions, through a literal back-and-forth chat. You’re the software architect; it’s the builder.
Imagine saying… 👉 “Build me an AI agent that hacks growth on Instagram by DMing people in my style and closing sales automatically.” 💡 Boom—your AI clone is now sliding into DMs, making sales while you sleep.
👉 “Create an AI that monitors live stock prices and places trades based on my personal risk tolerance.” 💡 Wall Street just got automated—by you.
👉 “Spin up a personal AI therapist that talks like Hunter S. Thompson and gives brutally honest life advice.” 💡 Welcome to the era of psychedelic self-help.
👉 “Generate an AI-powered OnlyFans bot that chats like me and maximizes my subscriber revenue.” 💡 AI side hustle? Passive income just got weird. 😳
👉 “Launch a deepfake version of me that can do 100 sales calls at once—with my voice, my knowledge, and my closing skills.” 💡 Clone yourself. Multiply your money.
This isn’t sci-fi—this is real, right now. 🚀 We’re using AI to create AI, and all you need is an idea and a mouth…. and AI does the rest.
So here’s what to do: go and build your first piece of software. Something simple – literally just ask ChatGPT or your preferred Large Language Model (Claude, Gemini, Grok, DeepSeek) to write your software… But also ask it how to undertake the process… to guide you through what to do next, and then ask it to do the next bit… All the parts of the puzzle… the code, the design, the implementation, all of it… It’s possible, and it will blow your mind how easy it is.
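To make that concrete, here’s a minimal, purely hypothetical sketch of the kind of thing a first exchange might hand back. The prompt (a one-page site that serves a random compliment), the file name and everything in it are assumptions for illustration – your chat will produce something different.

```python
# Hypothetical first result of a vibe-coding chat: a one-file "fun site".
# Save as compliments.py, run it, then open http://localhost:8000
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

COMPLIMENTS = [
    "You ask great questions.",
    "Your ideas are worth building.",
    "Nice prompt. Very specific.",
]

class ComplimentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a tiny HTML page containing one random compliment
        body = f"<h1>{random.choice(COMPLIMENTS)}</h1>".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ComplimentHandler).serve_forever()
```

Run it, look at it, then go back to the chat and say “now add a button” or “make it pink”. That back-and-forth is the whole game.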
Just pretend you have a coder sitting next to you…. because you do. Just ask it the next steps and tell it what to do like you would a person.
Here’s an example of how it responds: (I’ll do this fun site today)
Barely a day goes past without a gap between what we say and what we actually think. Especially when talking to your boss, your kids, even your partner.
Now, imagine a world where your actual thoughts could be directly translated into words by machines – for others to read. That world has arrived.
Meta’s recent advancements in brain-computer interface (BCI) technology are steering us toward this reality. By harnessing the power of artificial intelligence, Meta’s research aims to interpret neural activity associated with imagined speech, effectively reading brain waves to decode a person’s internal dialogue. Wow.
Side Note: Of course it was Meta … I can easily imagine Zuckerberg approving this research project personally with a smile on his face.
How Does It Work?
The core of this technology lies in detecting and interpreting imagined speech—when individuals “speak” internally without vocalizing. This process involves recording neural patterns using non-invasive methods like electroencephalography (EEG), which captures the brain’s electrical activity. This means they are not plugging anything into your body, no chips, no wires… Just some small sensors on the outside of your head.
Advanced AI algorithms then analyze these patterns to identify specific words or phrases the person is thinking. This method leverages the brain’s natural language processing regions, allowing the system to map thought patterns to corresponding linguistic elements.
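To give a feel for what “analyse these patterns” means in practice, here’s a toy sketch of such a pipeline. It is purely illustrative: it assumes a small labelled set of imagined-speech EEG trials, uses off-the-shelf signal processing and a simple classifier, and is nothing like the scale of Meta’s actual research system. Every name in it is made up for the example.

```python
# Toy imagined-speech decoder (illustrative only, not Meta's system).
# X: EEG trials shaped (n_trials, n_channels, n_samples)
# y: the word imagined on each trial, e.g. "yes", "no", "help"
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate in Hz

def bandpass(eeg, low=1.0, high=40.0, fs=FS):
    # Keep the 1-40 Hz band where most of the relevant activity sits
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def features(trial):
    # Crude per-channel log band power as the feature vector
    return np.log(np.mean(bandpass(trial) ** 2, axis=-1))

def train_decoder(X, y):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(np.array([features(t) for t in X]), y)
    return clf  # clf.predict(features(new_trial)[None, :]) guesses the word
```

Real systems swap the simple classifier for deep networks trained on hundreds of sensors and hours of data per person, and even then they recover the gist rather than the exact words.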
Current Accuracy Levels
While the concept is groundbreaking, the technology is still in its developmental stages. Current systems can reconstruct the general gist of thoughts but often struggle with precise word-for-word translation. For instance, studies have shown that while exact word reconstruction remains challenging, the decoded content is semantically similar to the intended message. This means the system might not capture the exact words but can infer the overall meaning or context of the thoughts with up to 80% accuracy.
Unveiling True Thoughts
This technology’s potential extends beyond mere novelty. Our internal thoughts often differ from our spoken words due to social filters, fear of judgment, or communication barriers. A system capable of decoding these unspoken thoughts could revolutionize various fields:
Healthcare: Providing a voice to patients with speech impairments or conditions like locked-in syndrome, enabling them to communicate more effectively.
Psychology: Offering insights into subconscious thoughts, aiding in understanding and treating mental health conditions.
Human-Computer Interaction: Allowing more intuitive control of devices, where machines respond directly to our thoughts, enhancing user experience.
Exponential Improvement
With AI capability exponentially improving, it’s not hard to imagine this reading minds with a 95% confidence level… within a year or two. But I think the more interesting area of exponential improvement is in the reading device itself. If the EEG machine resolution got far more powerful and accurate from a distance, all of a sudden, people may not need to don a helmet to have their thoughts read.
Imagine this: a room with the ability to read the minds of the people inside it!
So, where might this be valuable? Security checks at an airport come to mind. What about a police suspect questioning room? A court room? Or how about the corporate environment for job interviews…. or board meetings?
It certainly would change the world if we could tap into people’s unspoken thoughts. And it seems this will happen, sooner than we think.
Meta’s exploration into decoding brain waves to interpret imagined speech marks a significant stride in merging human cognition with artificial intelligence. But it also reminds us what business they are really in…. Surveillance Capitalism.
And for the first time since I’ve been writing here (nearly 20 years) my sign off (see below) might need to come with a warning!
As the year ends, here are my top 20 tech trends for 2025. These are not just trends to look out for, but ones to act on and benefit from.
Agentic AI – AI agents that are self-directed. You don’t just give them a prompt or task; you set them an objective, and they work out their own tasks, and subsequent tasks, until they achieve it. Just like a staff member would.
Generative AI Explosion – Now that the internet is a brain and not just a filing cabinet, we are about to see unprecedented levels of creativity, automation, and efficiency in content creation, design, and problem-solving.
Big Tech = Big Energy – Dominant AI firms are investing in nuclear energy to meet the escalating power demands of AI operations, effectively positioning themselves as future energy providers. This shift signifies a convergence of technology and energy sectors, with Big Tech starting to become as dominant in energy as traditional oil companies once were.
Post Search Society – Internet search as we know it is rapidly declining as generative AI shifts the paradigm from offering options to delivering direct answers. With live web integration enhancing real-time accuracy, traditional search engines face massive disruption, destined to become as obsolete as old media in the age of on-demand intelligence.
Poly-functional Robots – Multi-modal humanoid robots will start to appear in industrial settings. They will be trained visually and verbally, and be able to do everything a human can – just much better.
Machine Customers – Autonomous systems that make purchasing decisions and transactions on behalf of people or organisations, optimising choices based on preferences, data analysis, and real-time conditions. AI tools will emerge to do this. Many of us will start negotiating with well-informed machines.
Corporate AIs – Corporate AI systems beyond ‘co-pilot’ will emerge to be the ultimate centre of truth for every company and its history. Just ask them anything.
Data Lake Building – Centralized repositories that store vast amounts of structured, semi-structured, and unstructured data in raw form, from which companies can build their own AI. Expect to hear this term in corporate circles.
Personal Digital Twins – Virtual personal replicas powered by AI will learn from our phones to replicate our behaviors and preferences, simulating decision-making and assisting with any task. They’ll handle calls in our voice, type, converse, and act on our behalf, becoming essential proactive assistants that anticipate needs and optimize daily choices. Expect Apple to launch one by the end of 2025.
Tech Regulatory Deluge – As governments and society realise that technology companies have become more powerful than any entity in history—controlling economies, information, and even behaviour—a deluge of regulations will aim to rein in Big Tech, protect privacy, and ensure ethical AI, reshaping the balance of power in the modern world.
AI Creative Explosion – AI is unleashing a new era of creativity, allowing anyone to bring their imagination to life—whether through stunning visuals, immersive worlds, music, or art—eliminating traditional skill barriers and democratising artistic expression like never before.
The AI Director’s Chair – AI is transforming storytelling by allowing anyone to become a virtual director, crafting entire films, worlds, and narratives with just words. This democratisation of creation turns tools like generative AI into a personal Scorsese, empowering individuals to bring cinematic visions to life without traditional resources or expertise.
Software Society (“We All Code”) – AI is redefining how we interact with computers, allowing anyone to create new forms of computation simply by talking to them. By describing what we want, we can develop software, tools, and entirely new AI capabilities, transforming software creation into a conversational process.
AI Talent Pool Emergence – AI is giving rise to a new generation of content stars—Instagram influencers, movie actors, and models who have never existed. These hyper-realistic, AI-generated personalities eliminate traditional talent costs while captivating audiences, redefining entertainment, advertising, and the concept of celebrity.
AI Governance Panic – As AI capabilities accelerate beyond expectations, governments and organizations are scrambling to establish guardrails, fearing misuse, ethical violations, and societal disruption. This rush to regulate is creating a global race for AI governance, marked by tension between innovation and control.
Social Re-Wilding (Kids) – Inspired by growing awareness of the digital harms highlighted by thinkers like Jonathan Haidt, laws and cultural shifts are driving kids off screens and back into the physical world. This movement aims to reverse the social and developmental downsides of excessive screen time, fostering real-world play, creativity, and community. A shift to taking fewer risks online and more risks outside.
Human Machine Synergy – The future of work is defined by seamless collaboration with AI, which will even take over email, document creation, and spreadsheets on our behalf. This partnership amplifies human capabilities, transforming AI from a tool into an indispensable co-worker that enhances productivity and creativity in real time.
Likeness Licensing – Famous individuals will increasingly monetise their digital selves, licensing their AI-recreated likenesses to appear in movies, ads, and virtual experiences. This trend turns celebrity into an evergreen asset, allowing stars to profit from their persona indefinitely, even after their lifetime.
Global Robotaxi Ramp-up – Waymo and other autonomous driving pioneers are eating traditional ride-hailing services like Uber by rolling out fleets of robotaxis in key US markets. Expect rapid global expansion and a transformative shift in urban mobility starting in 2025.
AI eyeballs – With ChatGPT and a camera, live AI assistants analyse what you’re looking at and provide instant guidance. From fixing a car to cooking or assembling furniture, these AI-powered “eyes” turn any task into a collaborative, hands-free experience, blending vision with intelligence seamlessly.
Trend that won’t happen – Labour organising in union form to push back against AI taking jobs. For 2 reasons: (1) The labour it will replace is very unorganised…. and (2) We’ll invent new jobs and revenue streams as quickly as labour gets made redundant.
The Paul-Tyson fight last week was a seminal moment – not the boxing match itself, but the fact that Netflix streamed it live to 60 million people globally. For context, that’s half a Super Bowl. And to go retro, the two most-watched shows in TV history (outside of global live events) are:
M*A*S*H – final episode: 105 million viewers
Seinfeld – final episode: 76 million viewers
It’s been nearly three decades since traditional linear TV set any viewership records.
Goodnight and good luck
This is the start of the next phase in how we view ‘TV.’ There are only two modes that will survive:
On-demand.
Live events.
Pre-recorded, scheduled content on free-to-air television and cable is already dead—we just haven’t had the funeral yet. Streamers are coming for the last piece of the puzzle: live events.
To sport, we’ll soon add morning shows and TV news, both of which are relatively cheap to produce. These will deliver the final blow to traditional TV stations.
Netflix Live is Next
Netflix—or any streamer for that matter—could set up live local studios in a heartbeat and run them 24 hours a day in every country. Sports channels would be cheap if they focused on game highlights, which could be licensed. For news, their reporters could literally be influencers live-streaming from their phones wherever news is happening. The market already has these people ready and waiting on TikTok and Instagram. Morning show content could be populated by vloggers and bloggers eager for exposure, also shooting content straight from their pockets. There’s no need for expensive TV camera setups. Netflix could use their ad-supported model for these channels alone, and just like that, local linear TV is over.
TV Reality Check
The financial struggles of our local free-to-air TV channels have been well-documented globally and in Australia. While Channel 10 ended up in receivership and in the hands of CBS, both Seven West Media and Nine Entertainment Company have done little to secure their long-term futures. They may soon struggle to stay solvent. I know this sounds alarmist for large businesses with annual revenues of $1.4 billion and $625 million, respectively. However, their main revenue source—TV advertising—is facing a harsh reality.
In media, attention is sold through a metric called CPM (cost per thousand viewers). Here’s the reality check: free-to-air TV currently charges roughly seven times what social media channels do. That’s not a typo. A thousand viewers on Instagram, Facebook, or YouTube (the TV in your hand) cost advertisers around $7. Meanwhile, free-to-air TV charges around $50 per thousand viewers in prime time, with top shows like The Block costing as much as $175 per thousand viewers for a 30-second ad.
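The arithmetic is easy to sanity-check on the back of an envelope. The figures below are just the indicative ones quoted above:

```python
# Back-of-the-envelope check of the CPM gap, using the indicative figures above
social_cpm = 7        # ~$7 per 1,000 views on Instagram, Facebook or YouTube
tv_prime_cpm = 50     # ~$50 per 1,000 viewers, free-to-air prime time
the_block_cpm = 175   # ~$175 per 1,000 viewers for a top show

print(f"Prime-time TV costs {tv_prime_cpm / social_cpm:.1f}x social")  # ~7.1x
print(f"The Block costs {the_block_cpm / social_cpm:.0f}x social")     # 25x
```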
The irony? Most people are staring at the small TV in their hand while these overpriced commercials run on the big TV affixed to the wall. Add to this the fact that TV commercials aren’t as targeted as digital ads (specific interests and micro-locations), have no click-through potential, and disappear after airing. This imbalance in CPM pricing simply can’t last.
Why does this imbalance persist? Likely because the people buying ad space for large brands grew up before the internet and have a legacy mindset that TV is somehow superior. But it won’t be long before a new generation of CEOs and CMOs starts asking why they’re paying such a premium to reach the same audience – actually, a far inferior one. When that happens, expect free-to-air TV revenues to decline by at least 80%.
This paragraph alone should convince any investor holding these stocks to sell.
What could TV do?
Like all disrupted businesses, traditional TV is dripping with opportunities to extend revenue—if they only had the courage. Here’s where they could start:
Realize the world no longer runs in 30-minute slots. Online videos range from seconds to hours, showing that flexibility is key.
Lower the barriers to entry for digital catch-up TV. Right now, it’s too complicated, requiring registrations, and most shows disappear after a few weeks.
Leave entire back catalogs of shows online indefinitely, share ad revenue with content owners, and even allow content to be remixed by creators.
Instead, free-to-air TV treats catch-up services like a departure lounge for missed shows. Meanwhile, platforms like YouTube and TikTok already thrive on this long-tail content. Just search for your favorite 1980s TV show on YouTube, and you’ll see it there—earning ad revenue for Google instead of the original network.
Simply embracing long-tail content, asynchronous viewing, and allowing user-generated reinterpretations could revolutionize free-to-air TV’s business model. But they won’t do it. I know this because I’ve proposed it to multiple Australian TV channels.
Even though my show The Rebound ran for three seasons on Channel 9, I’ve had more people tell me they’ve seen me on TikTok, where my account has millions of views. We also offered to let them keep the show on their catch-up service indefinitely (we own the rights), and they refused. Free-to-air TV is stuck in a Nostalgia Trap.
Hubris can be a powerful force that brings businesses down.
Win the AI Race… and it is a race… – Get me in to do a 2025 briefing at your firm now! And join 7 of the world’s 10 biggest companies.
Modern Mind hack
The lesson is telling for us and our careers. It’s easy to get stuck in the past. If you want to remind yourself how quickly the world changes and you’re over 30, try this simple mental exercise: think back to how drastically things shifted for you between the ages of 15 and 20. In just five years, the music, fashion, trends, and even your own attitudes about what was “cool” could feel completely outdated. Remember how the difference between 1989 and 1994 felt enormous? That’s the speed of change through youthful eyes. As we age, we tend to forget this rapid pace, but revisiting how we perceived those shifts in our younger years can help us reconnect with the rapidly evolving technology economy.