In the Loop: Week Ending 6/28/25
When you look at all of the incredible progress being made in AI, hear predictions of certain doom in the world of work, and learn that we may have unknowingly drifted into a “gentle singularity,” it’s easy to imagine that we’ve passed the point where AI is still fallible in the most basic ways.
I was reminded of this yesterday as I worked on my weekly AI newsletter. I’ve been producing it for a couple of months now and use MattGPT to help me aggregate stories, summarize them, and prepare them for publication.
I’ve given MattGPT very detailed instructions: write 100-word summaries of the news articles I upload in a professional and accessible style. Go out to specific sites on the web and find any relevant stories I may have missed. Include links to the source material. Etc.
I’m very clear and very specific. I ask for the same things week after week. This is the kind of task AI is supposed to thrive at.
And yet…week after week I get 50-word summaries with no links. I get stories from other countries when I specified that I want to focus primarily on the U.S. And, as I learned yesterday, I get summaries that are completely made up, meaning the article I want summarized has nothing to do with the summary MattGPT gives me.
Here’s an example: I wanted to share an article about Sam Altman and his brother Jack appearing together on a podcast. They discussed the future of work and Sam said that the job of “podcast bro” didn’t exist until recently and it would have seemed silly even a few years ago.
Here's the exchange: "Like, podcast bro was not a real job not that long ago, and you figured out how to monetize it and you're doing great and we're all happy for you," Altman told his brother Jack, teasing him during an interview on Jack Altman's "Uncapped" podcast. "But would, like, the subsistence farmer look at this as a job, or like you playing a game to entertain yourself?"
So how did MattGPT summarize the article? (Wheat emoji and double-wide em dashes left in so you know it's legitimately AI-created. 😜)
🌾 AI Bros vs. Dirt Bros: The Future of American Work
Farming vs. Podcast Bros – Yahoo News
In a fascinating culture clash, the New York Times profiled rural farmers and blue-collar workers pushing back on the Sam Altman-fueled AI optimism of Silicon Valley. Their argument? America still runs on food, labor, and infrastructure—not podcasts and prompt engineering. One farmer calls the tech elite “soft-handed”—dependent on platforms that disappear with a power outage. Meanwhile, Altman and others bet on AI to reinvent everything from education to productivity. The divide is about more than geography; it’s about values. As AI continues to shape new industries, there’s a growing call to stay grounded in the tangible—and not forget who keeps the lights on.
Now, this sounds like a really interesting conversation that I would definitely tune into, but it's not even in the same universe as the actual article. MattGPT fabricated the whole thing, including the “fascinating culture clash” of AI tech bros and down-to-earth American farmers dragging the tech bros for having "soft hands".
What in the world is going on here? To be honest, I have no idea. I’ve prompted (and re-prompted and re-re-prompted) MattGPT to help me in very specific ways. I assume it’s good at this. So I come thisclose to publishing the summary without a review.
Luckily, the old managing editor in me won’t allow me to do that. So I checked. And I was flabbergasted at how audacious MattGPT was in his fiction. It even seems like he's trying to start a class war: "The divide is about more than geography; it’s about values. As AI continues to shape new industries, there’s a growing call to stay grounded in the tangible—and not forget who keeps the lights on."
I've had to admonish MattGPT in the past for being a passive-aggressive yes man. Whenever I call him out on his bad behavior (is it bad that I'm calling him "him"?), he sheepishly apologizes…then does it again the next week.
My point with all of this? It’s easy to imagine that at this point AI is infallible when it comes to the basics; that we can reliably lean on AI to handle certain things without much (or any) oversight from us.
But it’s not. Not today, and maybe not ever. So read the outputs from your AI tool of choice. Double-check sources. And for goodness’ sake, don’t assume that thought-provoking article about the clash between technology and farming is true.