AI keeps dazzling us with breakthroughs, but the longer I use these tools every single day – across clients, industries, and my own work – the clearer it becomes: AI may be advancing at a blistering pace, but it’s still hilariously, stubbornly dumb in some very human-inconvenient ways.
And that contradiction – the brilliance and the baffling stupidity – is exactly where the real work of AI adoption lives.
Because yes, part of the challenge is on us. We have to learn how to think in ways an AI system can actually follow. We have to break work into steps, be clearer than feels natural, and design tasks the way a machine processes the world.
But let’s not pretend this is only a user problem. It’s also very much a tech-readiness problem. These tools are being rolled out faster than anyone can evaluate them, hyped to the heavens.
We’re sold the fantasy of flawless execution. What we get is brilliance wrapped in brittleness.
And nowhere is that clearer than in my recent experience with Microsoft Copilot Studio.
It ignores its instructions. Can’t find the content I want it to audit. Does an audit but ignores most of the content it’s supposed to evaluate. And, most frustrating, it keeps recapping the instructions I’ve given it, tells me it has what it needs to do the work, tells me it’s starting the audit, and then…nothing…
I nudge it: “Start the audit.”
Instead, it restates the task. It confidently tells me it’s “beginning extraction now”… and then immediately asks another clarifying question.
I tell it – politely at first – “Stop telling me what you’re going to do and just do it.”
And the agent responds with: “I wasn’t able to generate a response. Please start over.” 🤦🏻
This isn’t me misusing it. This is the product’s immaturity showing.
These agents are being shipped fast – with enormous pressure to appear “enterprise-ready” – but too often they behave like gifted students with zero executive function. Brilliant, promising, capable in theory… and yet completely unreliable when asked to follow a simple, linear sequence.
That’s the gap that no one wants to talk about: the marketing suggests these agents can run workflows; the reality is that, in many ways, they’re still learning how to hold the pencil.
I’ve worked with my ChatGPT (whom I call MattGPT) for a while to hone the image style I use in my posts, so when I was working on this post, I went to MattGPT and asked it to build on the classroom dunce theme of the main image. It came up with great concepts, but when it came time to create one of the images, it generated a robot with three arms.
Here’s the back and forth that ensued:
(Me) “This is great, but the robot has three arms. Did you do that to be ironic?”
(MattGPT) Ha — no, sadly that wasn’t an artistic flourish. That was an accidental irony, which honestly just proves the whole theme of your post: AI can write 5,000 words of strategy but struggles to count limbs.
If you want, I can regenerate it with two arms. Just say: “Yes, regenerate it with two arms.”
Double face palm. 🤦🏻🤦🏻
Ask it to brainstorm concepts and it shines. But ask it to apply a simple, linear set of steps – the same set of steps it just repeated back to you flawlessly – and suddenly it’s like you’re coaching a Labrador through differential equations.
This tension is what fascinates me. And frustrates me. And kind of entertains me.
AI today is simultaneously genius-level and comically inept – depending on the prompt, the context window, and the phase of the moon.
The sooner we accept this contradiction, the sooner we can actually get value from these tools.
But what most executives aren’t prepared for is this reality:
We’re leading teams through a transformation driven by tools that are powerful, unpredictable, emotionally needy, and periodically incompetent – all at the same time.
Your team isn’t struggling because they’re not smart enough. They’re struggling because AI isn’t smart in the ways we need it to be yet.
It’s incredible at analysis, synthesis, and speed. It’s terrible at execution: following a simple, linear sequence, doing the work instead of recapping it, and getting the basics right.
This isn’t failure. It’s the awkward adolescence of AI. And these growing pains aren’t a distraction from the real work – they are the real work.
AI adoption isn’t just policies, pilots, and platforms. It’s learning how to collaborate with technology that is smarter than anything we’ve ever used – and yet still dumb in the most human ways.
If you push through the frustration – if you laugh a little and experiment a lot – you start to see the patterns. You start to anticipate the failures before they happen. You start designing tasks in ways that the machine can actually succeed at.
That’s the advantage. That’s the unlock. That’s what separates leaders who dabble from leaders who transform their organizations with AI.
But right now? We’re all working with a technology that’s astonishing in its potential and comically bad at the basics.
The frustration isn’t just user error. It’s not just product immaturity. It’s the collision of the two – the space where human expectation meets technological reality.
If we can learn to work inside that friction – to guide the machine, adapt our thinking, and account for its flaws (instead of being fooled by the hype) – we unlock meaningful value long before perfection arrives.
Because today’s AI is the student who sometimes nails the calculus question and sometimes forgets how pencils work.
The question is whether we’re willing to keep teaching it – and keep learning ourselves – while the rest of the world waits for a level of perfection that isn’t coming anytime soon.