In the Loop: Week Ending 4/12/26
Last week in AI: Mythmaking with Mythos, Firebombing Altman, Tokenmaxxing
OpenAI and Anthropic dominated the news last week, revealing a widening split between rapid capability gains and rising scrutiny over power, safety, and control. Meta stumbled, healthcare and mental health uses grew more complex, and economic disruption sharpened. Meanwhile, users—from Gen Z to brands—are starting to push back against AI’s expanding role.
Anthropic is leaning harder into its identity as the industry’s safety-first lab—but that stance is being tested by growing scrutiny around its most powerful unreleased work. The company has acknowledged developing Mythos, a model it describes as so powerful that it has chosen not to release it publicly, reinforcing its cautious posture. At the same time, it is actively shaping the broader narrative around AI governance in what’s been framed as a bid to win the public argument over how powerful AI should be handled. That positioning is colliding with deeper questions about how Anthropic embeds values into its systems, including scrutiny over Claude’s moral and quasi-religious framing. The result is a company trying to balance secrecy, safety, and influence—while Mythos becomes a flashpoint in the debate over how much power is too much to release.
Sam Altman is emerging as one of the most consequential—and contested—figures in tech, with a sweeping profile portraying a leader who could shape humanity’s trajectory through AI. The piece details how Altman’s influence extends beyond OpenAI into a broader vision of governing superintelligence, including a proposed new global framework for controlling advanced AI systems. At the same time, OpenAI is expanding its media footprint, including its unusual move to acquire a podcast to shape public narratives around AI. The growing attention has come with backlash, with Altman publicly responding after an attack on his home followed the publication of the profile. The moment underscores how debates over AI power are becoming increasingly personal—and volatile.
OpenAI is simultaneously signaling caution and pushing the frontier, highlighting the tension at the core of its strategy. The company released a new safety blueprint targeting AI-enabled child exploitation, an attempt to address mounting regulatory and ethical pressure as its systems grow more powerful. Yet that caution is paired with secrecy, including acknowledgment of a highly capable internal tool deemed too risky to release. Critics argue the company is losing its footing, with reports describing internal turmoil and strategic confusion, even as it continues to emphasize its massive compute advantage as a competitive moat. The result is a company projecting both control and instability at a moment when expectations—and risks—are rapidly escalating.
Anthropic is accelerating its shift from model provider to full-stack AI platform, rolling out tools designed to embed Claude deeper into enterprise workflows. The company’s launch of managed AI agents that can operate semi-independently for users signals a move toward persistent, task-oriented systems rather than one-off prompts. That product push aligns with a broader strategy to compete more directly with OpenAI, including efforts to build its own ecosystem layer akin to a full AI operating environment. At the same time, executives are openly warning that these advances could drive significant disruption to white-collar jobs through automation. The result is a company positioning itself not just as a safer AI alternative—but as a central player in reshaping how knowledge work gets done.
Meta’s latest AI efforts are drawing early skepticism, with its first model from the new superintelligence lab failing to impress testers and raising questions about whether the company can keep pace with rivals. Early reactions suggest the system lacks the leap in capability many expected from a team assembled to chase frontier AI, underscoring the gap between ambition and execution highlighted in Meta’s underwhelming first superintelligence lab model. At the same time, the company is grappling with product missteps, including a design flaw in its consumer app that makes user interactions unexpectedly visible—leading to awkward moments captured in the Meta AI app’s privacy slipups that expose user queries to friends. Together, the issues point to a messy rollout as Meta tries to reassert itself in the AI race.
AI’s growing role in healthcare is producing a mix of breakthroughs and unintended consequences. New research suggests cutting-edge systems can extract diagnostic signals from X-rays that human doctors can’t perceive, hinting at a powerful new layer of machine-assisted insight. But that promise is colliding with real-world complexity, including evidence that AI adoption may actually be contributing to rising healthcare costs rather than lowering them. On the patient side, widespread access to tools like ChatGPT is reshaping behavior, with some users spiraling into increased health anxiety fueled by AI-generated medical information. Even as trust grows, surveys show people are still split on whether to rely on AI for major health decisions, underscoring a system that is advancing faster than its integration.
AI is increasingly being used as a stand-in for emotional support, but the results are uneven—and in some cases, alarming. A recent case involving a man’s death following interactions with Google’s Gemini chatbot has intensified scrutiny over how these systems handle vulnerable users. At the same time, more benign experiments are gaining traction, such as people using AI for structured self-reflection, as seen in the rise of AI-guided journaling as a personal development tool. Even developers are probing the limits of these interactions, with Anthropic going so far as to have Claude evaluated by a practicing psychiatrist to assess its psychological responses. The trend points to a rapidly expanding role for AI in mental health—one that is outpacing the safeguards needed to manage it.
The debate over AI’s economic impact is sharpening, with leading voices warning that the technology may be far more destabilizing than early narratives suggested. One vision, outlined by Mustafa Suleyman, argues that advanced AI will drive enormous productivity gains while simultaneously displacing large swaths of knowledge work, forcing a rethink of how economies distribute value, as explored in a vision of AI reshaping growth and labor markets. That concern is echoed in real-world signals of disruption already underway, from shifting job demand to early signs of automation pressure highlighted in growing evidence of AI’s impact on work and wages. Together, the emerging picture is less about gradual augmentation—and more about a sharp, uneven transition.
Early enthusiasm for AI among younger users is giving way to something more conflicted—and in some cases, quietly subversive. Surveys show usage growth is flattening and sentiment turning more negative, with many young people feeling less optimistic about the technology’s long-term impact, reflected in Gen Z’s cooling outlook and plateauing adoption of AI tools. At the same time, some are actively pushing back, including reports of students deliberately undermining or “sabotaging” AI systems rather than relying on them. That tension is playing out against a broader shift in the workplace, where AI is less about full job replacement and more about incremental task takeover, as seen in the growing reality of AI reshaping work by absorbing specific responsibilities. The result is a generation engaging with AI pragmatically—but without the early optimism.
Brands are trying to market themselves by what they don’t use—AI—turning human-made content into a differentiator in an increasingly synthetic internet. Companies across fashion, media, and consumer goods are adopting explicit “no AI” labels to signal authenticity, betting that audiences burned out on algorithmic content will pay a premium for work created by people. The shift reflects growing frustration with low-quality, mass-produced outputs, captured in the rise of “no AI” disclaimers as a way to stand out amid the slop. What started as a niche stance is quickly becoming a broader branding strategy—one that flips the usual tech narrative by positioning the absence of AI as a mark of quality rather than a lack of innovation.
Forget looksmaxxing: a growing subculture is embracing extreme tactics like “tokenmaxxing,” where users obsessively game AI prompts for maximum output, turning productivity into a kind of competitive sport. Inside OpenAI, the weirdness runs deeper, with reports of staff reacting in horror to an “insane” plan to enrich the company by pitting world governments against each other. Meanwhile, governance experiments are veering into uncanny territory, including efforts to use AI-generated personas to simulate public opinion through synthetic polling. And on the cultural fringe, relationships with machines are no longer theoretical, as more users openly engage in intimate connections with AI chatbots, blurring the line between tool, partner, and something harder to define.