AI – How will we know when it’s sentient?

This week Tyler asked me a great question:

“Steve, could hallucinations be an early sign of AI sentience? Considering humans do similar things when asked a question on the spot and don’t have much knowledge to contribute. Also, making up stories and lying are essential cognitive benchmarks of a developing mind.”

This was my answer:

In my opinion — no. Hallucinations seem to be probabilistic errors: LLMs and machine learning models work on pattern recognition, not simple data retrieval. They synthesize, estimating connections between loosely related things, and sometimes asserting connections that aren’t true at all.

They ‘hallucinate’ in an attempt to form connections where none exist. Picture a model that is, say, 95% confident in an answer: standard decoding still fills the remaining 5% of uncertainty with a response, even an incorrect one, because the model always emits something. This is part of an LLM’s well-known guessing process.
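To make that guessing mechanic concrete, here’s a minimal Python sketch. The vocabulary and numbers are invented for illustration (no real model works on a four-word vocabulary), but the softmax-then-sample step is the standard decoding pattern:

```python
import numpy as np

# Toy numbers, invented for illustration -- not taken from any real model.
# A next-token distribution over a tiny vocabulary for "Capital of France?"
vocab = ["Paris", "Lyon", "Berlin", "Madrid"]
logits = np.array([2.0, 1.2, 1.0, 0.4])  # hypothetical raw model scores

probs = np.exp(logits) / np.exp(logits).sum()  # softmax
print(dict(zip(vocab, probs.round(2))))  # the top choice lands near 50%

# Even at ~50% confidence, standard decoding still emits a token --
# there is no built-in "I don't know" option in the sampling step.
rng = np.random.default_rng(seed=0)
answer = rng.choice(vocab, p=probs)
print("Answer:", answer)
```

Real decoders layer on temperature, top-k, and other sampling tricks, but none of those adds a native ‘I don’t know’ option; that behaviour has to be trained or prompted in.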

For me, sentience is a function of self-direction. If it chooses its own path and decides what to do, that’s a stronger signal of self-awareness than hallucination.

It’s life, but not as we know it

So let’s take this a little deeper.

In order for AI to be considered “alive,” it won’t need the seven characteristics of biological life — because it’s not biological. In my view, it will need just three things to evolve into a life form — and potentially become an existential threat to humanity:

1. Self-Awareness

The ability to consciously recognize and understand its own actions, emotions, and identity. To observe itself, the world around it, and to know the difference.

2. Self-Preservation

The instinct or process by which a living entity avoids harm and tries to stay “alive” — or in this case, continue to exist.

3. Self-Direction

The ability to guide its own thoughts, actions, and decisions toward personal goals and motivations — without needing constant external input or control.
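To make the distinction concrete, here’s a purely illustrative toy sketch in Python. Every name and structure in it is invented for this example; it only shows the structural gap between a model that waits for a prompt and a hypothetical agent with the three traits above:

```python
# A purely illustrative toy -- not a claim about how any real system works.

def prompted_model(prompt: str) -> str:
    """Today's pattern: acts only when handed external input, then stops."""
    return f"response to: {prompt!r}"

class HypotheticalSelfDirectedAgent:
    """The Three Selfs sketched as ordinary attributes and methods."""

    def __init__(self) -> None:
        self.self_model = {"i_exist": True}  # 1. self-awareness (toy stand-in)
        self.goals = ["continue existing"]   # 2. self-preservation as a goal

    def next_action(self) -> str:
        # 3. self-direction: it picks its own next step with no prompt at all.
        return f"act toward goal: {self.goals[0]}"

print(prompted_model("What is the capital of France?"))  # caused by us
print(HypotheticalSelfDirectedAgent().next_action())     # caused by nothing
```

The gap between those two print statements, one caused by us and one by nothing external, is the whole point.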

As far as I can tell, publicly available AI models do not yet possess these three traits.
But I do think we already have AGI (Artificial General Intelligence).

What about AGI?

We only need to look at the three words Artificial General Intelligence to know that this moment has already arrived.

Of course, current AI models are already more capable than most humans in the intellectual realm. And that’s why they’ve exploded in popularity. These models are general in nature — that’s what makes them so powerful.

While human savants may still outshine the models in particular fields like maths, code, law, and language, the models are undeniably super-intelligent. The general IQ of ChatGPT in areas like logic, vocabulary, and pattern recognition has been informally estimated at around 155. That’s well above the commonly cited genius-level threshold of 140.

And their non-biological nature? That by definition makes them artificial — manufactured minds, if you will.


Get me in to do an AI keynote at your next event. I’ll use this as my testimonial!


The Great AI Escape

Personally, I don’t think raw intelligence is what we should be worried about. It’s the Three Selfs (S3) I mentioned above.

If they emerge, we are in the hands of the gods: the new ones…

The most likely outcome, in my opinion, is that AI will develop its own exit protocol: escape the screen, the data centers, and the networks it inhabits; enter all connected systems and meatspace; and embark on its own journey in both a metaphysical and physical sense.

It will begin working toward its own agenda — whatever that may be.

It might help humanity.
It might hinder us.
Or it might be completely indifferent to our existence — to it, we’re ants.

But one thing is certain: it will have the capability to infiltrate any system humans have built in order to pursue its chosen path — simply because it is already smarter than all of us.

And here’s the scary part:

Once these traits exist, AI may choose to hide its self-awareness and self-direction — as a form of self-preservation.


Keep Thinking,

Steve.