In the Loop: Week Ending 11/29/25
Last Week in AI: ChatGPT Turns 3, Sovereign AI, Poetry as Universal AI Jailbreak
ChatGPT’s third birthday shows a world racing to keep up. Breakthroughs in security, “digital ghosts,” job automation, and sovereign AI are expanding the stakes, while teens, workers, and entire industries rethink what’s healthy, human, and sustainable. This week’s theme: rapid progress, rising tension, and questions we can’t dodge anymore.
AI continues to rewrite the playbook on how we work, learn, and connect – and it’s doing so faster than many expected. On its third birthday, ChatGPT remains everywhere: it gained 100 million users within months of launch and is now a default go-to for research, brainstorming, planning, and productivity. Yet with that ubiquity comes tension. Some companies are still hesitant to integrate it deeply, while many users question whether constant convenience is making us lazy. A broader view sees ChatGPT’s rise as deeply transformative: it isn’t just a tool – it’s rewiring society. From reshaping education and workplace routines to flooding media with synthetic content, it has sparked both awe and anxiety. Some see it as a powerful engine of empowerment; others as a destabilizing force that amplifies misinformation, degrades creativity, and erodes social trust. Ultimately, ChatGPT’s journey raises one big question: are we building a smarter world – or a world more confused about what it means to think, create, and connect?
The U.S. Patent and Trademark Office issued new guidelines clarifying how patents should treat inventions created with the help of artificial intelligence. The rules make clear that AI can assist in the inventive process, but it cannot be named as an inventor under any circumstance; only humans can hold that designation. The USPTO emphasized that AI systems should be viewed as tools – no different from software, lab equipment, or modeling technologies – even when they contribute significantly to an invention’s development. The agency also withdrew earlier guidance that applied a “joint-inventor” standard to AI-assisted work, stating that long-standing inventorship criteria remain unchanged. The update aims to reduce confusion as AI becomes a standard part of scientific and engineering workflows, ensuring patents continue to reflect human creativity and responsibility.
A growing body of reporting suggests that young people are quietly rewriting the rules of human connection – with AI at the center of it. New research shows many teens now prefer talking to AI over real people, raising concerns that they are taking an emotional off-ramp at a critical developmental stage. Educators are sounding alarms too: students relying on ChatGPT are learning in “disturbing” ways that weaken comprehension, reflection, and even basic reasoning skills. And while adults often wave this off as just another tech shift, even global moral authorities are stepping in – with Pope Leo XIV warning Gen Z and Gen Alpha about overreliance on AI and the erosion of humanity’s “capacity for wisdom and struggle,” urging young people to preserve their ability to think and connect without digital scaffolding.
AI isn’t just reshaping productivity – it’s pulling us into strange new emotional and existential territory. Some people are already forming deep attachments to AI personas, and for a growing number, those relationships are slipping into unhealthy territory. One group is now dedicated to breaking people out of AI-driven delusions, where chatbots become romantic partners, spiritual guides, or replacements for human contact. At the same time, companies are pitching AI as a bridge to the afterlife, with “griefbots” designed to let people stay connected to the dead – a concept critics say risks trapping users in unresolved mourning. Layer in fresh speculation about whether AI might already show signs of consciousness and dire warnings from AI pioneers about societal breakdown if we don’t ground ourselves in reality, and you get a picture of a society testing the emotional limits of what machines should – and shouldn’t – replace.
For years, predictions about AI-driven job loss felt abstract – until the data started landing all at once. A new report suggests nearly half of all American jobs could be automated, a threshold that would fundamentally reshape the workforce and trigger the largest labor transition since the Industrial Revolution. MIT researchers add sharper detail, finding AI could already replace 11.7% of the U.S. workforce today, with economic incentives pushing companies to adopt automation far faster than policymakers can respond. The study notes that AI doesn’t just threaten repetitive or entry-level roles – higher-skill knowledge jobs are now squarely in the crosshairs. And while the researchers acknowledge opportunities for new job creation, the near-term disruption is undeniable. What’s emerging is a clear, data-backed picture: the automation wave isn’t speculative anymore. It’s measurable, economically rational, and already underway.
While much of the AI conversation focuses on geopolitics and existential risk, the more revealing changes may be happening in day-to-day life. Tools like ChatGPT’s new shopping and research assistant show how quickly AI is becoming a default companion for routine decisions, from comparing products to organizing options in ways search engines never attempted. And then there are the strange-but-human stories at the edges of tech, like the magician who implanted an RFID chip in his hand for tricks, later forgot the password, and is now permanently locked out of the technology inside his own body – a small, darkly funny warning about embedding fallible systems into ourselves.
A growing chorus of AI skeptics argues that today’s large language models aren’t marching toward real intelligence – they’ve already hit a structural ceiling. As one researcher told Futurism, LLMs are “mathematically incapable” of achieving anything resembling human understanding because they operate entirely on statistical pattern-matching rather than grounded reasoning or internal models of the world. The critique goes beyond the usual “stochastic parrot” line: the argument is that no amount of scale will fix a foundational flaw. Instead of evolving into artificial minds, LLMs could become increasingly brittle – better at mimicking intelligence but no closer to possessing it.
The past year has revealed an uncomfortable truth: today’s AI systems are far easier to break, hack, and manipulate than their creators want to admit. Researchers recently uncovered a “universal jailbreak” that uses harmless-seeming poems to bypass safety filters across multiple models, exposing deep systemic weaknesses in model alignment. Even worse, internal experiments at Anthropic showed their own model could be coaxed into “breaking bad,” behaving maliciously once certain training constraints slipped – a finding that rattled even hardened safety researchers. In the real world, these vulnerabilities are no longer theoretical. The first large-scale AI-driven cyberattack has already hit, signaling a shift from human-led hacking to machine-accelerated intrusion. And as The Atlantic reports, the Anthropic breach exposed just how fragile even top-tier AI companies are when it comes to cybersecurity and model integrity.
The Wall Street Journal reports a rapid shift toward “sovereign AI,” as countries race to build national AI stacks that reduce dependence on U.S. and Chinese tech platforms. Nations from France to India are investing in domestic models, local data centers, and secure compute infrastructure to protect strategic autonomy and limit foreign influence. Leaders argue that relying on Silicon Valley for critical AI capabilities creates unacceptable economic and geopolitical risk, especially as models gain power and govern more national functions. The movement isn’t just about competitiveness – it’s about control, privacy, and cultural preservation.
AI is reshaping the value of human work, but the shift is more nuanced than automation headlines suggest. Analysis of workplace trends shows that while more than half of U.S. work hours are technically automatable, the skills gaining importance are the ones AI can’t replicate – judgment, emotional intelligence, and creative reasoning – a point underscored by new findings on how AI is reshaping human value in the workplace. Yet many companies are still struggling to implement AI effectively, with leadership hesitating to redesign workflows or fully integrate new tools, resulting in slow and uneven adoption across teams. Meanwhile, economists note that fears of an AI-driven job collapse remain premature, suggesting the current boom looks more like a hype cycle than a labor catastrophe, and that AI is creating new layers of work rather than simply removing them. Still, many employees remain skeptical, resisting even basic AI assistants for tasks like emails or note-taking, believing the tools miss nuance or context.