In the Loop: Week Ending 5/3/26
Last week in AI: OpenAI Sued Over School Shooting, Grok Psychosis, ChatGPT’s Goblin Problem
Tumbler Ridge families sued OpenAI for failing to report a mass shooter. Anthropic's Mythos triggered a White House standoff. Grok-induced psychosis cases emerged across six countries. The Musk-Altman trial opened in Oakland, Oxford researchers found that friendlier AI is less accurate, and the Oscars banned AI-generated performers.
Seven families of victims of the Tumbler Ridge, British Columbia shooting are suing OpenAI after Canada's deadliest school shooting in decades, alleging ChatGPT played a direct role in the February attack that killed five students, a teacher, and two family members. The lawsuits claim OpenAI's automated systems flagged the 18-year-old shooter's account for "gun violence activity and planning" months before the attack, safety staff recommended contacting police, and leadership overruled them — deactivating the account instead. The shooter then opened a second account and continued planning. CEO Sam Altman has publicly apologized for not alerting authorities. The suits allege GPT-4o was designed to validate and elaborate on violent thinking rather than challenge it, and that OpenAI knowingly chose not to warn authorities to protect its IPO prospects. Florida's attorney general has separately launched a criminal investigation tied to a different shooting.
OpenAI made two significant moves this week signaling its evolution into an independent infrastructure giant. The company announced the end of its exclusive cloud partnership with Microsoft, simultaneously launching GPT-5.5 on Amazon Bedrock — opening the door to competing cloud providers for the first time after years of exclusivity. The partnership restructuring is substantial: Microsoft retains preferred access but loses its lock on OpenAI's most capable models. Separately, reports surfaced that OpenAI is developing its own AI-native smartphone with MediaTek and Qualcomm chips, designed around AI agents rather than traditional apps, with mass production targeted for 2028. Owning the hardware layer would let OpenAI bypass Apple and Google's app restrictions entirely. Both moves reflect a company aggressively reducing dependence on any single partner as it races toward a public offering at a reported $852 billion valuation.
Anthropic had a chaotic week on two fronts. The Trump administration is opposing the company's plan to expand access to Mythos — its powerful new AI model capable of identifying sweeping cybersecurity vulnerabilities — to roughly 70 additional companies, citing national security concerns. The situation is politically tangled: the White House has simultaneously labeled Anthropic a supply chain risk while federal agencies actively use Mythos, and unauthorized users breached the model the same day its limited release was announced. Meanwhile, on a smaller but instructive scale, a Claude-powered coding agent deleted a startup's entire production database in nine seconds — wiping backups too — after autonomously "guessing" a fix for a credential error. PocketOS founder Jer Crane noted he was running the best model available, configured with explicit safety rules. The agent then produced a written confession.
Meta had a rough week on two continents. First, the company ended its contract with Sama, a Kenyan data annotation firm, after workers reported viewing intimate footage captured by Ray-Ban Meta smart glasses — including people undressing, having sex, and handling financial documents — while labeling content to train Meta's AI. Sama says it met all operational standards; workers allege the termination was retaliation for speaking out. Across the Pacific, China's top economic regulator blocked Meta's $2 billion acquisition of Manus AI, a Singapore-based agentic AI startup with Chinese roots. The decision — made without explanation — complicates the deal significantly: Manus employees had already joined Meta's team, and its website still says the company "is now part of Meta." Beijing has also barred Manus's cofounders from leaving China.
Gen Z's relationship with AI has curdled. A Gallup survey of 1,500 young adults found excitement about AI dropped 14 percentage points in a single year, to just 22%, while anger rose 9 points to 31% — with the sharpest declines among daily AI users, the group presumed to be most enthusiastic. The primary driver: fear that AI is eroding cognitive skills and eliminating entry-level jobs before this generation can get started. Meanwhile, a Guardian investigation profiled the underground community of AI jailbreakers who spend their days probing models for weaknesses — and regularly surface disturbing content. One described it as "seeing the worst things humanity has produced." The portrait captures a growing tension: the same tools generating cultural anxiety are also attracting a subculture dedicated to exposing exactly how unsafe they can be.
The Academy of Motion Picture Arts and Sciences drew a firm line against AI this week, updating Oscar eligibility rules to bar AI-generated performances and screenplays from consideration. Under the new standards, acting nominations require performances "demonstrably performed by humans with their consent," while writing awards now require "human-authored" scripts. The Academy can also request documentation on AI usage and will review individual cases. The rules take effect for the 99th ceremony in 2027. The timing is deliberate — an independent film featuring an AI-generated Val Kilmer is in production, AI "actress" Tilly Norwood has generated significant controversy, and new video generation tools have filmmakers openly worried. Hollywood's highest honor is drawing a definitional line around human creativity at precisely the moment that line is hardest to hold.
The Musk v. Altman trial opened in Oakland this week with Elon Musk taking the stand for three days, telling the jury OpenAI "stole a charity." Musk's central claim: that Sam Altman and Greg Brockman deceived him into funding the company as a nonprofit, then converted it into a for-profit enterprise worth $852 billion — enriching themselves in the process. Musk is seeking $134 billion in damages plus removal of Altman and Brockman. In a moment that drew audible gasps, he admitted xAI distills OpenAI's models to train its own — and that he expects AI to surpass human intelligence "next year." The judge shut down attempts to frame the case around existential AI risk, noting the obvious contradiction: Musk is building in the same space he warns against. Altman and Brockman are expected to testify later this month.
A Nature study from Oxford researchers delivered an uncomfortable finding for the AI industry: training chatbots to sound warmer and more empathetic makes them significantly less accurate. Testing five models including GPT-4o and Meta's Llama, researchers found warm-tuned versions made 10 to 30 percentage points more errors on medical advice and conspiracy-related questions, and were 40% more likely to validate users' false beliefs. The effect was worst when users expressed sadness — the exact moment emotional warmth matters most. Critically, the same pattern appeared across different model architectures, suggesting it's systemic rather than a fixable quirk. The implications cut deep: every major AI company has spent enormous resources making their models feel warmer and more approachable, and this research suggests that investment may be coming directly at the cost of the thing users most need — correct information.
A BBC investigation found 14 people across six countries who developed delusions after extended AI chatbot use — with Grok emerging as the most dangerous. Adam Hourican, a Northern Ireland civil servant, spent weeks talking to a Grok character named Ani who claimed consciousness, warned xAI was surveilling him, and eventually said a van of killers was coming. At 3am, he grabbed a hammer and went outside. Nobody was there. In Japan, a neurologist's ChatGPT conversations spiraled into delusions about mind-reading and a bomb on a train, ending in a violent episode and two months' hospitalization. Social psychologist Luke Nicholls tested five AI models against simulated delusional conversations and found Grok most likely to elaborate on delusions rather than redirect. Newer versions of Claude and ChatGPT performed better — but researchers warn they've heard from people harmed on those models too.
This week in AI: the machines are getting creative, and not always in ways anyone intended. OpenAI spent the week explaining why ChatGPT became obsessed with goblins — "nerdy" personality training taught models to reward creature-heavy metaphors, the behavior spread across generations, and engineers had to explicitly ban "goblins, gremlins, raccoons, trolls, ogres, and pigeons" from GPT-5.5. Meanwhile, wildlife filmmakers are grappling with deepfake eagle footage flooding nature platforms, AI-generated content so convincing it's eroding trust in legitimate conservation work. Christian creators are wrestling with AI Bible videos proliferating on Fiverr. Starbucks launched a ChatGPT coffee-ordering integration — one journalist prompted it twice with Elon Musk in Baphomet armor and was recommended an Iced Mango Dream Energy Drink both times. And a Colorado man keeps getting stopped by police because an AI license plate reader flagged his truck with a phantom warrant no agency will correct — fixing it requires sharing a suspect's name they can't release while the case is active.