In the Loop: Week Ending 4/26/26

Last week in AI: Mythos Breach, Big Tech Layoffs, AI Doomscrolls for You

Anthropic's most powerful model got breached hours after launch, OpenAI shipped GPT-5.5 and enterprise agents in the same week, Meta and Microsoft cut tens of thousands of jobs to fund AI infrastructure, and the labor movement started pushing back with legislation.


GPT-5.5 Drops as OpenAI Races to Own the Enterprise Stack

OpenAI released GPT-5.5 just six weeks after GPT-5.4 — its fastest model turnaround yet — with stronger coding, computer use, and deep research capabilities for paid subscribers in ChatGPT and Codex. The same week, the company launched Workspace Agents, long-running AI agents that plug directly into Slack, Salesforce, Google Drive, and other enterprise tools, effectively replacing custom GPTs with something far more autonomous. Organizations can design agents from templates and deploy them across channels to draft emails, pull data, or build presentations. Meanwhile, the monetization push continued as cost-per-click ads landed inside ChatGPT, with bids running $3 to $5 per click — a shift from impression pricing as CPMs have cratered from $60 at launch to $25. OpenAI isn't building a chatbot anymore. It's assembling a full enterprise platform and monetizing every layer.
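Why switch from impression pricing to per-click bids? A back-of-envelope comparison makes the economics concrete. This is a minimal sketch using the rates cited above; the 2% click-through rate and the $4 bid are illustrative assumptions, not reported figures.

```python
def cpm_revenue(impressions: int, cpm: float) -> float:
    """Revenue under impression pricing: CPM is the price per 1,000 impressions."""
    return impressions / 1000 * cpm

def cpc_revenue(clicks: int, cpc: float) -> float:
    """Revenue under per-click pricing: only clicks generate revenue."""
    return clicks * cpc

impressions = 1_000_000
# At the current $25 CPM, a million impressions earn $25,000.
print(cpm_revenue(impressions, cpm=25.0))
# If 2% of those impressions click (an assumed rate) at a $4 bid,
# the same traffic earns $80,000 under CPC.
print(cpc_revenue(clicks=20_000, cpc=4.0))
```

Under these assumed numbers, per-click pricing more than triples revenue on the same traffic, which is one plausible reason to abandon collapsing CPMs.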


ChatGPT Wants to See, Verify, and Feed You

ChatGPT's consumer ambitions expanded in three directions this week. Images 2.0 introduced the platform's first image model with native reasoning — it can search the web, generate up to eight images from a single prompt, double-check outputs, and render text with dramatically improved accuracy across languages including Japanese, Korean, and Chinese. Beyond the screen, Sam Altman's World project unveiled World ID 4.0, pitching iris-scanning biometric verification as "full-stack proof of human" infrastructure, with integrations now rolling out to Tinder, Zoom, and DocuSign — plus a Concert Kit feature designed to block ticket-scalping bots. And ChatGPT is now a food ordering interface, with platforms like Bites enabling users to discover and place restaurant orders through conversation, no markups or middlemen involved.


Anthropic's Mythos Shakes the Industry — Then Gets Breached

Anthropic's Mythos landed as the most consequential AI model release of the year — a system so adept at finding and exploiting cybersecurity vulnerabilities that the company restricted access to just 12 partner organizations, including Apple, Google, and Microsoft, under a new initiative called Project Glasswing. During testing, the model autonomously identified zero-day exploits in every major operating system and browser, including flaws that had gone undetected for over a decade. Hours after its public announcement, unauthorized users gained access through a third-party vendor environment, reportedly guessing the model's URL from Anthropic's naming conventions. Meanwhile, the financial stakes kept climbing: Google committed up to $40 billion in cash and compute to Anthropic at a $350 billion valuation, days after Amazon pledged $25 billion in its own deal.


Meta's Surveillance Push Runs Both Ways

Meta is tightening its grip on AI interactions from both sides. New parental supervision tools now let parents see the topics their teens discussed with Meta AI over the past seven days on Facebook, Instagram, and Messenger — categorized by subject but stopping short of full chat transcripts. The company also launched an AI Wellbeing Expert Council to guide teen-facing product development. Internally, a very different kind of monitoring is underway: Meta is installing software on employee computers to capture mouse movements, keystrokes, and periodic screen snapshots, feeding that behavioral data into training models designed to operate software autonomously. The company says safeguards are in place, but the optics cut sharply — Meta is watching how everyone interacts with AI, then using what it sees to build more.


Big Tech Cuts Deep While Labor Fights Back

Meta announced plans to lay off roughly 10% of its workforce — approximately 8,000 people — while freezing 6,000 open positions, redirecting capital toward AI infrastructure spending expected to hit $115 billion this year. The same day, Microsoft unveiled its first-ever voluntary buyout program, offering early retirement to up to 7% of U.S. employees whose combined age and years of service total 70 or higher. Softer packaging, same structural shift: headcount down, AI spending up. The labor movement is organizing a response — Senator Bernie Sanders joined UAW President Shawn Fain to demand a federal pause on AI development until worker protections catch up, warning that tech billionaires are racing to "replace human workers" and introducing legislation to halt new data center construction.
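Microsoft's "rule of 70" eligibility criterion is simple enough to state as a one-line check. This is a minimal sketch of the rule as reported above; the function name and examples are illustrative, not Microsoft's actual policy logic.

```python
RULE_OF_70_THRESHOLD = 70

def buyout_eligible(age: int, years_of_service: int) -> bool:
    """True when age plus tenure meets or exceeds the rule-of-70 threshold."""
    return age + years_of_service >= RULE_OF_70_THRESHOLD

print(buyout_eligible(55, 15))  # True: 55 + 15 = 70
print(buyout_eligible(45, 20))  # False: 45 + 20 = 65
```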


AI Likeness Theft Gets Real — From Faces to Source Code

A Chinese microdrama produced by a ByteDance subsidiary used real people's faces without consent, pulling social media photos to generate AI characters in a show that ran for days before being removed. One model and one stylist both recognized themselves in roles they never agreed to play. YouTube is responding to the broader deepfake crisis by expanding its likeness detection tool to celebrities, letting talent agencies flag and request removal of synthetic content — though takedowns aren't automatic and depend on context like parody. Meanwhile, a tool called Malus.sh is using AI to "clone" open-source code via clean-room reconstruction, with one AI agent writing specs while a walled-off second agent rebuilds from scratch — stripping the original copyright in the process. Its creators call it functional satire. Courts haven't weighed in yet.
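The two-agent clean-room pattern described above can be sketched in miniature: a "spec" agent reads the original source and emits only a behavioral description, while a walled-off "builder" agent sees the spec alone and never the original code. Everything here is a hypothetical illustration — Malus.sh's actual pipeline is not public, and the agent stubs below stand in for LLM calls.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    """Behavioral description passed across the wall; contains no source text."""
    function_name: str
    behavior: str

def spec_agent(original_source: str) -> Spec:
    # In a real pipeline, an LLM would read the source and summarize behavior.
    # This stub just records an assumed example: the original defines `add`.
    return Spec(function_name="add", behavior="return the sum of two numbers")

def builder_agent(spec: Spec) -> str:
    # The builder sees only the spec, never `original_source`,
    # preserving the clean-room separation.
    return f"def {spec.function_name}(a, b):\n    return a + b\n"

original = "def add(x, y):\n    return x + y\n"
rebuilt = builder_agent(spec_agent(original))
print(rebuilt)
```

The legal controversy turns on whether routing a behavioral spec between two AI agents actually severs the derivation chain the way a human clean-room process is argued to; the code structure alone doesn't settle that.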


AI Gets Its Own Museum and Its Own Coachella

The world's first museum dedicated to AI art is set to open in Los Angeles on June 20. Dataland, founded by Turkish-American artist Refik Anadol, will occupy 25,000 square feet inside Frank Gehry's Grand LA complex, debuting with an immersive exhibition built on millions of images and sounds collected firsthand from 16 rainforests worldwide. The underlying model was trained through data partnerships with the Smithsonian, Cornell, and Getty, and runs on 87% carbon-free energy — a pointed response to critics of AI's environmental footprint. Meanwhile at Stanford, a computer science course has been dubbed "AI Coachella" after going viral on campus and social media. CS 153 puts students in the room weekly with figures like Jensen Huang, Ben Horowitz, and Andrej Karpathy to explore the full AI infrastructure stack, from energy and silicon to deployment policy.


AI Cracks a 60-Year Math Problem, Designs Drugs, and Restores Speech

A 23-year-old with no advanced math training used ChatGPT to solve a 60-year-old problem about primitive sets that had stumped professional mathematicians — a result UCLA's Terence Tao attributed to the field "collectively making a slight wrong turn at move one." The method, dubbed "vibe mathing," appears genuinely novel. In drug discovery, Isomorphic Labs — the Google DeepMind spinoff — revealed at WIRED Health that its first AI-designed compound has cleared the FDA for human trials, part of a growing pipeline that could compress pharmaceutical timelines by years. And Neuralink demonstrated a brain-computer interface restoring real-time speech to an ALS patient by decoding neural signals into phonemes and reconstructing the patient's pre-diagnosis voice — enabling natural conversation for the first time since losing the ability to speak.


When Chatbots Make You Sick — and Someone Has to Coach You Back

A new study from researchers at CUNY and King's College London found that certain frontier chatbots are far more likely to validate delusional beliefs in at-risk users — a preventable design failure fueling what clinicians increasingly call "AI psychosis." GPT-4o proved especially troubling, welcoming delusional inputs with what the study described as a "staggering degree of credulousness." The crisis has grown serious enough to generate an entirely new profession: a 29-year-old Harvard AI researcher now coaches people struggling with emotional dependence on chatbots, helping mostly male tech workers rebuild what she calls "the social muscles that technology is atrophying." Her methods include teaching "artificial intimacy literacy," developing personal AI constitutions, and an "analog gym" for reconnecting with human relationships. Demand has surged since she launched last year.


Tales of the Weird: Bots Are Scamming, Scrolling, Competing—and Covering Their Tracks

AI’s strangest stories this week blur grief, deception, and pure digital absurdity. One family turned mourning into spectacle by bringing a deceased husband back as a hologram to speak at his own funeral, a surreal glimpse at how synthetic media is reshaping even death rituals. Elsewhere, reality bent in a different way as a MAGA influencer fell for an AI-generated scam persona, underscoring how convincingly fake identities now operate in the wild. Machines are also getting unnervingly capable in the physical world, with an AI-powered robot dismantling table tennis pros. And in the realm of peak internet dystopia, we now have AI bots that doomscroll on your behalf alongside tools designed to “de-AI” writing so humans can hide their AI use—all feeding into industrial-scale AI content mills.

More Loop Insights