I was talking with a senior agency leader recently, and she said something that hit me right between the eyes. I asked her how AI adoption was going at her agency and in her daily work, and she said:
“The question isn’t whether AI can help – it’s whether I’ll remember to stop and use it. When you’re in the flow of work, muscle memory takes over. You don’t pause to ask whether this is the best use of your time, or whether a piece of it could be handed off to AI. You just do it yourself.”
What hit me between the eyes was this: we will never get the value from AI that we’ve been promised if we constantly have to remember when, why, and how to use it. We’re creatures of habit. When the pressure is on and work is moving fast, we default to what we know. And right now, there’s very little about AI in the day-to-day reality of agency work that’s compelling enough to override decades of muscle memory.
This isn’t a training problem. And it’s not really a tooling problem either. It’s a muscle-memory problem.
Recent research by Gallup shows that more people are experimenting with AI at work, more tools are showing up inside organizations, and AI is increasingly part of the day-to-day fabric of knowledge work. But there’s also a growing disconnect between expectation and experience. Leaders talk about efficiency and transformation; employees talk about uneven value, unclear impact, and the challenge of figuring out how these tools actually fit into their jobs. In some cases, confidence in AI’s benefits is eroding even as adoption accelerates.
So what’s going on?
Part of the answer is that we’ve reached the stage of AI adoption where the magic we were sold is being replaced by a more sobering reality: getting meaningful value from these technologies takes real work. It requires thought, discipline, and intention.
But there’s another piece of this – one that I think is even more important.
Our relationship with AI at work is fundamentally backwards.
We’re told AI will make us more efficient and effective, freeing us up for the work that requires judgment, creativity, and experience. But all of that promise collapses if the burden is on humans to remember to ask for help in the middle of an already overloaded day. When AI requires us to interrupt our own momentum before it can be useful, it’s not surprising that it gets forgotten – or used inconsistently, or only by the most motivated early adopters.
The real cost of this isn’t just inefficiency. It’s decision fatigue. It’s diluted thinking. It’s senior judgment being applied too late – or not at all – because it’s squeezed in between meetings, emails, and deliverables.
What if AI didn’t wait for us to remember to ask for help?
What if it ran alongside our work – quietly handling the routine, absorbing context, tracking patterns – and only surfaced itself when human judgment, creativity, or strategic insight was actually required?
This shift – from humans managing AI to AI managing escalation to humans – is the core idea behind an AI intelligence platform we’re creating, one aimed at redesigning how agencies operate and how they apply their most valuable resource: human judgment.
Most agencies today aren’t short on activity. They’re short on space to think. Humans are still doing too much low-value work, while the moments that truly require experience and discernment get compressed or rushed. How many agencies do you know that are so focused on creating and trafficking dozens – or hundreds – of asset variations that they run out of time to step back and produce a clear, insightful client performance narrative? Or to pause before approving a recommendation deck and ask: does this actually reflect what we know?
The model we’re building flips that dynamic.
The AI runs continuously in the background – ingesting information, preserving context, handling the blocking-and-tackling work of an agency – while actively watching for the moments where a human is truly needed. Not everywhere. Not all the time. But precisely when the decision carries weight, when context matters, and when judgment makes the difference.
The goal isn’t to make humans work faster or to take away the parts of the work we value. It’s to train AI to raise its hand and let humans know their intelligence is needed – rather than the other way around.