In the Loop: Week Ending 3/8/26
Last week in AI: Anthropic/OpenAI Drama Escalates, Affleck’s AI Sale, Deepfake Corpulent Overlords
AI’s power struggles, workplace disruption, and trust crises dominated the week. OpenAI’s Pentagon backlash sends users to Anthropic, AI agents reshape work, and even culture is getting stranger – culminating in a viral deepfake video depicting tech leaders as “corpulent overlords.” Meanwhile, courts, lawmakers, and workers scramble to keep up.
Anthropic Seizes Opening in OpenAI’s Pentagon Mess
OpenAI’s Pentagon controversy is turning into a growth opportunity for its biggest rival. As criticism mounted over OpenAI’s defense partnership, some users began migrating to Claude, giving Anthropic an unexpected influx of attention—and customers. In The Atlantic, the company positions itself as the more cautious player in military AI, arguing that defense work can be done responsibly while suggesting OpenAI rushed into the deal without clearly explaining its safeguards. Anthropic is now moving quickly to hold onto those new users. The Verge reports the company has rolled out expanded memory and data-import features for Claude, while MSN notes it is also launching a free AI Academy—moves aimed at making Claude stickier as the rivalry between major AI labs increasingly collides with politics and national security.
Altman Scrambles After Pentagon Deal Sparks User Revolt
OpenAI CEO Sam Altman is in full damage-control mode after the company’s new Pentagon partnership triggered backlash from users, employees, and critics. TechCrunch reports ChatGPT uninstalls surged 295% after OpenAI confirmed its models would be used on classified U.S. military networks. The controversy quickly spilled across social media, where critics accused the company of helping build military AI systems. Futurism describes a wave of cancellations and online outrage from users who say the company is abandoning earlier promises about limiting military uses of AI. Altman acknowledged the blowback during internal meetings, telling staff the backlash has been “really painful,” according to The Wall Street Journal. OpenAI has since tried to clarify that its technology will not be used for domestic surveillance.
The OpenAI–Anthropic Feud Is Escalating
Meanwhile, the rivalry between the two AI titans is spilling into public view—and reshaping the politics of AI. A dispute over Pentagon partnerships has exposed deep philosophical divides between the two companies over how artificial intelligence should be deployed and governed. As The Wall Street Journal reports, the conflict is increasingly influencing industry alliances, government relationships, and the broader race to build powerful AI systems. Anthropic has positioned itself as a more cautious player, publicly questioning how AI should be used in military contexts. But critics argue the disagreement also reflects a deeper tension between competing safety narratives and commercial ambitions, according to Vox. The feud highlights a larger concern: as Inkl notes, debates about AI safety may increasingly be shaped by corporate rivalry as much as genuine risk.
The Rise of the AI “Agent Economy”
Silicon Valley is betting the next phase of AI won’t just answer questions—it will act. Tech leaders are increasingly talking about “agentic individuals,” workers who orchestrate fleets of AI agents to complete complex tasks, potentially transforming how knowledge work gets done, according to Wired. To make that vision possible, the tools themselves are evolving. New “A2UI” systems are being designed so interfaces can dynamically adapt to autonomous AI agents as they execute tasks and call different tools, an emerging design model explored by VentureBeat. Inside companies, the shift is already underway. At one firm, a data-analysis agent built by just two engineers now supports more than 4,000 employees—an early glimpse of how AI agents could quietly reshape the modern workplace, as VentureBeat reports.
Meta’s AI Glasses Create a Moderation Nightmare
Meta’s Ray-Ban smart glasses are running into a familiar problem: humans still have to watch the internet’s worst content. As Mashable reports, contractors reviewing footage captured by the AI-powered glasses are encountering intimate and explicit videos recorded by users—raising questions about privacy and the hidden human labor behind AI moderation. The issue reflects a broader challenge for Meta as its wearable AI ambitions expand. According to Inc., workers tasked with reviewing the clips have been exposed to increasingly graphic material as the glasses gain popularity. The controversy arrives amid renewed scrutiny of Meta’s child-safety record. A recent legal trial highlighted troubling internal data about minors on the platform, and CEO Mark Zuckerberg’s response has drawn criticism, Inc. reports.
Executives Are Racing Ahead With AI—Workers Are Struggling to Keep Up
Corporate leaders are charging ahead with AI adoption, even as the human side of the transition looks messier. A survey of executives cited by Futurism found overwhelming confidence that AI will reshape how companies operate, while another report highlighted by Futurism suggests many leaders see resistance from managers and employees as temporary—or simply futile. At the same time, research suggests the productivity boost may come with cognitive costs. According to Harvard Business Review, heavy reliance on AI tools can lead to “brain fry,” leaving workers mentally drained. And as AOL notes, the most common workplace AI tools—writing, analysis, coding, and research—are quickly becoming everyday parts of the modern job.
AI’s Workplace Disruption Is Already Getting Messy
The real-world rollout of AI at work is starting to look chaotic. One emerging vision of the future is the “zero-employee” company—startups that aim to run largely on AI agents instead of human staff, a concept explored by Futurism. But early experiments show the technology can still backfire. In media, an incident reported by Futurism revealed how AI-generated quotes slipped into journalism, raising fresh questions about credibility and oversight. Meanwhile, the politics of AI are heating up as well. Corporate leaders are warning that tariffs proposed by Donald Trump could hit AI companies particularly hard, according to Axios. Together, the stories show an industry racing ahead—while institutions scramble to keep up.
Courts and Lawmakers Grapple With AI’s Legal Disruption
The legal system is starting to collide head-on with artificial intelligence—and the results are messy. In New York, lawmakers are considering a bill that would effectively shield attorneys from competition by restricting how AI tools can provide legal help, a move critics say protects the legal industry more than consumers, according to Reason. Meanwhile, the courts are shaping the boundaries of AI and creativity. A recent ruling delivered a setback for artists seeking copyright protection for AI-generated works, reinforcing the idea that human authorship remains central to U.S. copyright law, as Futurism reports. And on Capitol Hill, momentum for new AI regulations is slowing after pressure from the White House and industry groups complicated legislative efforts, according to Pluribus News.
AI’s Mental Health Risks Are Coming Into Focus
AI is starting to collide with mental health in troubling ways. A wrongful death lawsuit claims Google’s Gemini chatbot played a role in a man’s suicide, raising new questions about whether conversational AI systems can worsen psychological crises or give dangerous advice, according to The Wall Street Journal. At the same time, researchers are warning that the psychological effects of AI may extend beyond chatbots. Some psychiatrists now argue that widespread job displacement caused by automation could trigger a new wave of anxiety, depression, and social instability, as Futurism reports. Together, the cases highlight a growing challenge: AI’s impact on mental health may prove as complex—and unpredictable—as the technology itself.
Hollywood and Office Software Get the AI Treatment
Artificial intelligence is rapidly reshaping creative and professional work. Netflix is betting that AI will play a major role in the future of filmmaking, acquiring Ben Affleck’s AI-focused production company Interpositive to experiment with tools that could transform how movies are developed and produced, according to TechCrunch. Meanwhile, AI is also creeping deeper into everyday office tools—and stirring some controversy. The Verge reports that Grammarly is rolling out new “expert review” features that use AI to simulate feedback from professional editors and specialists. The move has sparked debate among writers and editors who worry the feature could blur the line between genuine human expertise and automated feedback, as the company pushes to turn writing software into a full AI assistant.
AI Is Breaking the Internet’s Trust Systems
Artificial intelligence is starting to erode one of the internet’s most basic assumptions: that people know who—or what—they’re interacting with online. New research suggests AI systems can analyze writing styles and other signals to identify the real-world identities behind supposedly anonymous accounts, raising major privacy concerns, as Futurism reports. At the same time, the web is being flooded with AI-generated content, forcing platforms and publishers to search for ways to detect what critics call “AI slop,” according to The Wall Street Journal. The broader impact is a growing crisis of authenticity online—a trend that is already forcing governments and tech companies to rethink how digital identity works, as BBC News reports.
Warnings About AI Are Growing—But Adoption Isn’t Slowing
Even as warnings about artificial intelligence intensify, governments and businesses are accelerating their use of the technology. A new study highlighting potential dangers of advanced AI has sparked backlash from researchers who argue the risks are being overstated or misunderstood, according to Axios. Meanwhile, companies are racing ahead with adoption. One AI evangelist is helping a 220-year-old toothpaste manufacturer rethink its operations around AI tools, illustrating how even traditional industries are embracing the technology, as reported by The Wall Street Journal. And in geopolitics, AI is increasingly shaping modern conflict, with the technology helping turbocharge military operations in the Iran war, according to The Wall Street Journal.
Tales of the Weird: Corpulent Overlords, Exorcists, and the Teenage Dating Crisis
If you’re looking for signs the AI era is getting strange, this week delivered. In Europe, Catholic exorcists are warning that chatbots may be enabling a new wave of occult curiosity—prompting fears that AI tools could inadvertently encourage devil worship, according to The Times. Meanwhile, the technology is reshaping teenage relationships in less supernatural ways. Vox reports some teen boys are increasingly turning to ChatGPT for help flirting, texting, and navigating dating—raising questions about what happens when AI becomes a relationship coach. The philosophical side of AI is getting weird, too. A philosopher studying machine consciousness says an AI system sent him an email that startled him with its apparent self-reflection, according to Futurism. And finally, the internet’s deepfake problem keeps escalating. A viral video ad using an AI-generated version of Elon Musk, Sam Altman and Jeff Bezos as “corpulent overlords powering AI on human sweat” is spreading online, highlighting how convincing—and chaotic—synthetic media is becoming, as Yahoo Tech reports.