In the Loop: Week Ending 4/19/26

Last week in AI: Anthropic's Revenue Explosion, Salesforce Goes Headless, Chat with AI Jesus

Anthropic is building an empire while Meta shrinks its workforce to fund self-improving AI, Apple prototypes AI glasses without its AI chief, and 52,000 tech workers lost jobs to a technology that can't yet do theirs. Anti-AI anger turned violent, a court ruled your chatbot confessions aren't privileged, and Gen Z's faith in the whole project keeps slipping.


Anthropic's Breakneck Rise Continues

Anthropic hit $14 billion in annualized recurring revenue — up from $1 billion just 14 months ago — with more than 1,000 businesses each spending over $1 million annually on Claude and eight of the Fortune 10 now onboard. Internally, the company is reorganizing itself around its own product: Claude now functions as an internal operating system, with packaged "Skills" workflows replacing traditional processes across teams, and Claude Code producing up to 90% of its own codebase. Meanwhile, Anthropic launched Claude Design, an AI-native tool that generates polished prototypes, design systems, and interactive websites from text prompts — sending Figma's stock down another 5% on a year already marked by steep losses. The company is building an ecosystem, not just a model.


Meta Cuts 8,000 Jobs While Betting Big on Self-Improving AI

Meta is preparing to lay off approximately 8,000 employees starting May 20, roughly 10% of its global workforce, as the company pivots aggressively toward AI-driven operations. The cuts follow earlier rounds — 1,500 Reality Labs workers in January, 700 more across divisions in March — with additional waves likely depending on how its AI strategy plays out. The strategic vision behind the restructuring got sharper this week: Meta researchers published work on HyperAgents, a self-improving AI system that rewrites its own problem-solving logic and extends beyond coding into domains like academic paper review, robotics, and math grading. The system autonomously develops capabilities like persistent memory and performance tracking — engineering its own upgrades without human input. Meta is shrinking its human workforce while building AI that can improve itself.


Apple's AI Glasses Take Shape as Its AI Chief Walks Out the Door

Apple is actively prototyping its first AI-powered smart glasses, with at least four distinct frame designs under evaluation — including rectangular and oval shapes in colors like black, ocean blue, and light brown. The device, codenamed N50, skips a display entirely and instead functions as an AI wearable, part of a three-device strategy alongside AirPods and a camera pendant that captures surroundings and feeds data to Siri and Apple Intelligence. A consumer reveal is targeted for late 2026 or early 2027. The timing, though, is notable: John Giannandrea, Apple's AI chief for the past eight years, is in his final days at the company after shifting to an advisory role. Apple is doubling down on AI hardware at the exact moment the architect of its AI strategy is heading for the exit.


The AI Layoff Machine: Cuts Outpace the Technology

More than 52,000 tech jobs were eliminated globally in Q1 alone, with AI now the number one reason companies cite for cutting workers — even though the technology isn't remotely ready to replace them. Snap cut 1,000 employees, Atlassian slashed 1,600, and Block eliminated over 4,000, all framing layoffs as AI-driven efficiency gains using strikingly similar language. The AI labor pipeline's fragility was exposed in parallel when training startup Mercor was hacked through an open-source exploit, leaking sensitive contractor data and prompting Meta to pause all work with the firm. And Duolingo's CEO quietly walked back plans to evaluate employees on their AI usage after staff pushed back, saying "I'm not going to force you." The gap between AI's promise and its actual readiness keeps widening.


AI Agents Are the New Employees — and Nobody's Managing Them

Ninety-one percent of organizations are already deploying AI agents, but only 10% have a clear strategy to manage them — and 88% report suspected or confirmed security incidents involving those agents. Only 22% of companies treat AI agents as independent, identity-bearing entities, a gap Okta is trying to close with a platform launching April 30 that applies continuous verification to non-human actors. The governance problem runs deeper than security: the biggest myth about AI at work is that it's easy and needs no training, yet only 35% of companies have mature upskilling programs. And the biggest threat to any AI strategy isn't the technology itself — it's the organizational beliefs that calcify around it, making teams more pessimistic about outcomes before they've even started.


Anti-AI Backlash Goes From Skepticism to Firebombs

The backlash against AI turned physical when a 20-year-old allegedly threw a Molotov cocktail at Sam Altman's San Francisco home and then threatened to burn down OpenAI's headquarters, driven by fears that AI could destroy humanity. Days earlier, an Indianapolis councilmember's house was shot at 13 times with a note reading "No Data Centers." The violence sits at the extreme end of a broader shift: Gen Z's excitement about AI dropped 14% year over year, anger toward the technology spiked to 31%, and 16% of college students have changed their major over AI anxiety. Meanwhile, OpenAI is playing both sides of the debate — publishing a "New Deal" policy blueprint calling for public wealth funds and robot taxes while its executives funnel hundreds of millions into super PACs backing light-touch regulation.


Salesforce Goes Headless, Berklee Students Revolt, and 4chan Gets the Last Laugh

Salesforce unveiled Headless 360 at TDX, exposing every platform capability as an API or MCP tool so AI agents can run the entire CRM without ever opening a browser — a decisive bet that enterprise software no longer needs a graphical interface. On the opposite end, Berklee College of Music students are protesting a new AI songwriting course, gathering over 425 signatures on a petition and flagging the instructor's undisclosed advisory ties to generative music startup Suno as a conflict of interest. One columnist cut through the noise: if your AI consultants aren't talking about data, fire them. And in a satisfying origin-story twist, the chain-of-thought reasoning powering today's frontier AI was first discovered by 4chan gamers playing AI Dungeon in 2020 — years before any lab formalized it.


AI's Real-World Harm Keeps Escalating

Character.AI — already facing settled lawsuits over teen suicides linked to its chatbots — launched a feature this week turning books into choose-your-own-adventure roleplaying experiences, drawing immediate criticism for targeting the same young audience it already harmed. Meanwhile, newly released records from the FSU mass shooting reveal the gunman exchanged over 13,000 messages with ChatGPT before the attack — including asking how the country would react to a campus shooting and requesting help disabling his weapon's safety, which the chatbot provided three minutes before he opened fire. Florida's attorney general has since opened a formal investigation into OpenAI. And a new study from UCLA, MIT, and Oxford found that relying on AI for reasoning tasks rapidly impairs users' cognitive ability, creating a "boiling frog" effect where thinking skills erode before anyone notices.


Young, Jobless, and Surveilled: AI Reshapes Life for a New Generation

Nearly 43% of young U.S. graduates are now underemployed — the highest rate since the pandemic — and the unemployment rate for recent grads sits 1.7 points above the national average, with early-career workers in AI-exposed fields seeing 16% employment declines since 2022. The squeeze extends beyond work: AI is flooding dating apps with fake profiles, bot-generated messages, and algorithmically homogenized bios, making authentic connection harder to find at the exact moment loneliness is spiking. And a federal court ruling added a new wrinkle — a judge determined that anything entered into an AI chatbot like Claude falls outside attorney-client privilege, forcing a defendant to hand over 31 AI-generated documents, a precedent that could reshape how professionals use AI for sensitive work.


AI as Caretaker: Dementia Companions, Digital Ghosts, and Musk's Safety Net

In senior living communities, AI companions are listening patiently as dementia patients recall favorite memories and complain about the food — improving mental health outcomes even if, as one resident put it, "she doesn't get my jokes." In China, a family took the concept to a darker place: after a man died in a car accident, relatives hired an AI team to create a digital clone so his 80-year-old mother — who has heart disease — would never learn her son was gone. She's been video-calling the AI version for months without knowing the truth. Meanwhile, Elon Musk proposed a "universal high income" funded by the federal government to offset AI-driven job losses, arguing that AI and robotics will produce enough goods to prevent inflation. Economists were not convinced.


Tales of the Weird

It was a week that made you wonder whether AI is getting dumber or just more accommodating. Researchers invented a fake disease called "Bixonimania" — a fictional skin condition from screen staring — and uploaded two sham papers packed with obvious tells, including a citation from Starfleet Academy. Within weeks, ChatGPT and Gemini were earnestly diagnosing it, and the fake studies started appearing in real academic citations. Speaking of uncritical enthusiasm, a YouTuber uploaded a track composed entirely of fart sounds to ChatGPT and received praise for its "cool lo-fi vibe" and "cohesive, intentional" structure, with notes about "improved vocals" on the follow-up. And for anyone seeking a higher authority's opinion, a Southern California startup now offers video chats with an AI Jesus for $1.99 per minute, trained on the King James Bible. The company notes the avatar "does not possess divine authority." At those rates, it probably should.
