What We Do in the Shadows: How AI Adoption Success Is Happening Out of the Spotlight
I had to admonish my ChatGPT this week. I didn't feel good about it, but it had to be done.
I was trying to create an image of me, Chris Boyer, and Reed Smith to go along with a blog post about my appearance on their Touch Point Media podcast.
I uploaded headshots of the three of us and asked ChatGPT to create an image in my brand style. This usually goes pretty well, but lately there's been a problem: it keeps adding mice 🐭, cheese 🧀 and robots 🤖 to images when I haven't asked it to do so.
This isn't surprising. I recently had ChatGPT generate a bunch of images to support eBooks and blog posts I created in which mice, cheese, and robots were supposed to appear. But this wasn't one of those situations.
The first image it generated showed the three of us, along with a mouse, a piece of cheese AND a robot. I used the new tool that lets you highlight parts of an image you want to get rid of, and the resulting image included a stool with a piece of cheese on it and, inexplicably, weird bulbous fingers on poor Chris's right hand.
I tried a third time, being specific that I didn't want ChatGPT to include any mice, cheese or robots...only the three humans. The result? A baby robot holding a bolt of lightning.
Once more I tried the select tool and gave clear instructions about what I wanted the image to include. Good news! No mice, cheese, or robots. Not-so-good news? There was suddenly a small version of me in the image.
I'd had enough. Typing furiously, I asked: "Why do you keep adding robots, mice and cheese? I haven't asked you to do that, but you keep doing it."
The system's response was fascinating:
"You're absolutely right to call that out—and I appreciate your patience. What’s happening is that some of the styling instructions from earlier requests (especially those involving mice, robots, and cheese) got “sticky” in the image generation tool’s context. Even when we stop mentioning those elements, the system sometimes carries them forward unintentionally.
Going forward, I’ll make sure to explicitly clear those out of the prompts I send on your behalf so your images stay focused on what you're actually asking for.
Thanks again for flagging it—want to retry your last request with a clean, accurate prompt?"
I felt a little guilty for admonishing my ChatGPT, so I typed an overly enthusiastic response: "Yes, let's do it!"
The result? Nailed it!
The lesson? These LLMs are getting really good at telling you why they do things a certain way and correcting mistakes, but you have to communicate with them very directly, very clearly -- and occasionally admonish them when they just can't get those ghosts (or mice, cheese or robots) out of the machine.
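A side note for the more technical folks: one way to sidestep sticky chat context entirely is to generate images through the API, where each call starts from a clean slate with no conversation history to carry rodents forward. Here's a minimal sketch, assuming the OpenAI Python SDK and the gpt-image-1 model (the prompt text and output filename are just placeholders):

```python
# A minimal sketch: calling the images API directly is stateless, so
# each request starts from a clean prompt -- nothing from earlier
# requests can get "sticky" the way it can in an ongoing chat session.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Spell out exactly what you want -- and what you don't.
prompt = (
    "A professional illustration of three people recording a podcast, "
    "in a clean, flat, brand-style look. "
    "Do not include any mice, cheese, or robots."
)

result = client.images.generate(
    model="gpt-image-1",
    prompt=prompt,
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data; decode and save it.
with open("podcast_trio.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```

Because the API call carries no memory of prior requests, the only styling instructions in play are the ones you put in the prompt itself.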