In the Loop: Week Ending 3/1/26
Last Week in AI: Anthropic/Pentagon Gets Real, OpenAI Swoops In, White House Hockey Deepfake
Last week in AI: Anthropic and the Pentagon clash over defense use as OpenAI consolidates power in Washington and corporate America. The White House posts a deepfake of an Olympic hockey star. Smart glasses spark liability concerns. White-collar workers face productivity panic. Markets swing on AI narratives. Cultural norms shift. Guardrails lag. Mental health risks mount. And Burger King puts Patty in its employees' ears. AI isn’t looming—it’s live.
If you like to watch drama unfold, Anthropic and the Pentagon have given you a lot of popcorn-eating time this week. The two organizations are now in an open standoff over how frontier AI should be used inside government. In a recent CBS News interview, CEO Dario Amodei called federal actions “retaliatory and punitive” after the administration restricted Anthropic’s government access. At the center: defense use. Scientific American details how Anthropic’s safety-first posture is colliding with Pentagon ambitions, while Axios describes OpenAI stepping in and deepening its Pentagon relationship. As I wrote in “Dario: We’re Not in Kansas Anymore,” this isn’t theoretical. The fight over AI safety, power, and national security is unfolding in real time.
Meanwhile, OpenAI has taken advantage of the opening and is rapidly consolidating influence in both Washington and corporate America. As Axios reports, Sam Altman has been lobbying the Pentagon while OpenAI and Anthropic compete for defense contracts, underscoring how central AI has become to national security strategy. NPR details the political friction around banning AI weapons, highlighting the uneasy overlap between Silicon Valley and military power. Meanwhile, CNBC reports that OpenAI is partnering with consulting giants like Accenture and McKinsey to deploy frontier models across Fortune 500 firms. And in a late-night Ask Me Anything, Altman cast OpenAI as global infrastructure, not just another tech company. The through line: OpenAI isn’t just building tools—it’s accumulating policy muscle, market power, and geopolitical leverage.
Meta keeps pitching its AI glasses as seamless, ambient computing. Reality is intruding. As Gizmodo reports, the company may have inadvertently highlighted how easily the glasses can trigger legal exposure and privacy backlash, raising fresh questions about whether always-on cameras belong on people’s faces. The concerns aren’t theoretical. The Verge reports the glasses were used in an incident involving alleged doxxing connected to ICE, underscoring how quickly wearable AI can escalate real-world harm. Meanwhile, 404 Media reveals a Meta patent describing AI systems designed to simulate dead people—a concept critics call “spectral labor,” where the deceased become digital product features. The through line is getting harder to dismiss: smart glasses aren’t just futuristic gadgets. They’re portable surveillance systems wrapped in consumer branding—and a mounting corporate liability.
The AI productivity boom is starting to look more like a productivity panic. As Bloomberg reports, coding agents like Claude Code are dramatically accelerating output inside tech companies—while quietly intensifying fears that fewer human engineers will be needed to ship the same amount of work. Efficiency gains are real. So is the anxiety. MarketWatch writes that “FOBO,” the fear of becoming obsolete, has moved from worker group chats to investor calls, where markets are now pricing in both AI upside and labor downside. And in journalism, the shift is already operational. The Washington Post reports that AI-written articles are reshaping workflows at the Cleveland Plain Dealer, raising new questions about quality, trust, and staffing. The through line is unmistakable. AI isn’t a distant transformation—it’s an immediate recalibration of white-collar value, unfolding in real time across codebases, newsrooms, and earnings reports.
Even as anxiety spreads, institutions aren’t retreating—they’re reorganizing around AI as core infrastructure. At Block, one engineer joined specifically to help build AI tools internally, a sign—per Yahoo Finance—that companies are racing to weave AI directly into product development and daily operations rather than treating it as an add-on. Entire sectors are following suit. Morning Brew reports that AI is rapidly embedding itself in real estate, automating listings, marketing copy, customer outreach, and back-office processes once handled by teams of agents and assistants. The shift is structural, not experimental. And culturally, it’s personal. The Wall Street Journal writes that AI executives are already steering their own children toward adaptable, AI-fluent career paths built around creativity and technical literacy. The message is clear. While some workers brace for displacement, others are repositioning. AI isn’t just disrupting industries—it’s quietly redrawing the map of ambition, education, and long-term security.
AI hype now comes with a volatility premium. As The Times reports, a viral, doomsday-themed AI Substack helped trigger a sell-off that wiped billions from tech stocks, exposing just how sensitive valuations have become to existential narratives. In this cycle, fear spreads faster than fundamentals. MSN writes that dystopian AI outlooks are rattling already skittish investors, amplifying market swings in ways that feel less tethered to earnings and more tied to emotion. The result is a feedback loop: viral pessimism drives volatility, volatility reinforces panic, and panic becomes price action. The through line is hard to miss. In the AI era, narrative risk is financial risk. When speculation about runaway systems or mass displacement trends online, markets react in real time—turning abstract technological anxiety into immediate economic consequence.
Beyond Wall Street, AI is reshaping culture at multiple layers—from classrooms to championship arenas. Mashable reports that teenagers are turning to AI for homework help, creative experimentation, casual research, and even emotional support, weaving the technology into daily routines in ways that feel less like novelty and more like dependence. At the highest levels of competition, the cognitive shift is even more striking. MIT Technology Review writes that elite Go players are adapting their strategies to match AI-inspired play styles, effectively rewiring how the ancient game is understood and executed. And in politics and sports, manipulation risks are increasingly visible. ESPN reports on backlash after an AI-doctored White House video circulated publicly. The pattern is unmistakable. AI isn’t just a productivity tool—it’s becoming a cultural co-author, quietly influencing how people learn, compete, and perceive reality itself.
AI deployment is accelerating faster than the systems meant to contain it. VentureBeat reports that enterprises are rapidly adopting emerging AI protocols and model connectivity frameworks, often before implementing basic security controls or governance standards. The race to integrate AI into workflows is creating new attack surfaces inside organizations that may not fully understand their exposure. The risks extend beyond corporate networks. NBC News reports on a growing AI-driven child exploitation crisis, underscoring how generative tools can be misused at scale with devastating human consequences. In both cases, capability is advancing faster than constraint. The broader warning is clear. AI innovation is compounding, but oversight remains reactive, fragmented, and often under-resourced. The through line: society is building powerful systems first and debating safeguards later—leaving institutions, markets, and individuals navigating the fallout in real time.
AI’s expanding role in mental health is drawing sharper scrutiny this week. A Futurism report highlights growing concern about people turning to chatbots for emotional support amid provider shortages—and the risks when those tools aren’t designed for clinical care. At the same time, researchers uncovered more than 1,500 vulnerabilities in popular mental-health apps, including AI-powered features, potentially exposing deeply sensitive user data, according to TechRadar. Experts are also warning that chatbot interactions may reveal signs of psychosis or unhealthy emotional attachment, as reported by The Guardian, while France is pushing for international guardrails to protect children from AI-related harms, per Le Monde.
The AI era is getting stranger by the week. At Burger King, an AI assistant named “Patty” now lives inside employee headsets, offering real-time coaching—and scoring friendliness—during shifts, as Fast Company reports. Vice notes that workers see it less as support and more as algorithmic surveillance. Uber employees are joking about an “AI CEO,” according to Futurism, while elsewhere startups are selling AI-generated “handwritten” letters—synthetic intimacy at scale, as Futurism reports. Online, things get even weirder. Yahoo News describes a heated subreddit where users debate AI consciousness like theology, and the BBC reports on growing friction as AI systems behave in unsettlingly human ways.