In the Loop: Week Ending 7/26/25
Last week in AI: Trump Deregulation, Telepathic LLMs, Altman Gets Real
The AI industry experienced seismic shifts this week as major players reshuffled talent, launched autonomous agents, and faced safety scrutiny. Meta's superintelligence lab hire, Trump's deregulation push, and mounting evidence of AI job displacement signal 2025 as the year AI moves from experimental to transformational – while safety concerns and mental health impacts demand urgent attention.
The Trump administration released its comprehensive AI Action Plan, focused on removing regulatory barriers while maintaining U.S. leadership against China. The plan emphasizes three pillars: accelerating innovation, building AI infrastructure, and making American technology the global standard. Notably, it requires federal AI procurements to be "objective and free from top-down ideological bias" – a politically charged requirement that experts say could slow innovation rather than speed it. The plan also recommends considering states' AI regulatory climates when distributing federal funding, effectively incentivizing lighter regulation. This follows Trump's earlier repeal of Biden's AI safety executive order and reflects Silicon Valley's preference for minimal oversight. White House AI Czar David Sacks emphasized this as a "global competition" with China, particularly after DeepSeek's R1 model rattled markets earlier this year. The plan balances industry-friendly deregulation with strategic competition concerns, but implementation details remain vague.
OpenAI CEO Sam Altman delivered sobering warnings about AI's impact this week, telling Federal Reserve officials that entire job categories like customer support will be "totally, totally gone" as AI agents replace human workers, and revealing the three AI scenarios he's most afraid of. On Theo Von's podcast, Altman described a different vulnerability: people using ChatGPT for therapy lack legal confidentiality protections, creating privacy risks for users who treat the AI as a therapist. Unlike conversations with licensed professionals, ChatGPT conversations could be legally compelled in court proceedings—a gap Altman acknowledged while fighting a New York Times lawsuit demanding access to millions of user chats. While he envisions AI replacing entire workforce categories, the lack of basic privacy protections for vulnerable users seeking emotional support represents a critical blind spot in OpenAI's rapid expansion.
Mark Zuckerberg announced that Shengjia Zhao, former OpenAI researcher and co-creator of GPT-4, will serve as Chief Scientist of Meta's newly created Superintelligence Labs (MSL). Zhao will lead the lab's scientific agenda alongside Zuckerberg and new Chief AI Officer Alexandr Wang. This marks Meta's latest move in an aggressive hiring blitz where the company has reportedly offered compensation packages worth $100-300 million over four years to lure top AI talent. Meta recently invested $14.3 billion for a 49% stake in Scale AI and brought Wang aboard from his CEO role there. The appointment comes as Meta works to establish credibility after the mixed reception of its Llama 4 model, which faced criticism for poor real-world performance and inconsistent quality. With Meta planning to "invest hundreds of billions of dollars into compute to build superintelligence," Zhao's appointment signals the company's commitment to competing at the AI frontier. As I mentioned in a recent article, I’m not sure Meta’s AI push is good for the world.
If you read my recent post on ubiquitous AI, you know ambient AI has taken hold in healthcare, where clinicians use AI tools in exam rooms to document patient interactions and reclaim time from administrative tasks. Further proof comes from Freed AI, which has reached 20,000 paying clinician users who each save 2-3 hours daily, with the platform processing nearly 3 million patient visits monthly. The startup uses a modular AI pipeline that generates structured clinical notes tailored to each user's preferences. However, competition is intensifying – Doximity launched a free ambient AI scribe this week for all verified U.S. physicians, highlighting market commoditization. Despite generating $20 million in annual recurring revenue, Freed faces pressure from well-funded competitors. The company's emphasis on learning from clinician edits and building personalized AI scribes may prove a key differentiator as the market consolidates around safety and compliance.
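Freed hasn't published its architecture, but to make "modular AI pipeline" concrete, here's a minimal, hypothetical sketch of one stage of an ambient scribe: turning a visit transcript into a structured SOAP note shaped by a clinician's preferences. The model name, prompts, and `draft_clinical_note` helper are illustrative assumptions using the OpenAI chat API as a stand-in, not Freed's actual implementation; the "learning from clinician edits" piece would sit downstream of a step like this.

```python
from openai import OpenAI  # stand-in LLM backend; Freed's actual stack is not public

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_clinical_note(transcript: str, clinician_prefs: str) -> str:
    """Hypothetical ambient-scribe stage: turn a visit transcript into a
    structured SOAP note, shaped by the clinician's stated preferences."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a medical scribe. Produce a SOAP note "
                    "(Subjective, Objective, Assessment, Plan). "
                    f"Follow these clinician preferences: {clinician_prefs}"
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


note = draft_clinical_note(
    transcript="Patient reports two weeks of intermittent headaches...",
    clinician_prefs="Bullet points, omit normal findings, plan as numbered steps.",
)
print(note)
```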
AI platforms generated over 1.13 billion referrals to top websites in June, representing a 357% year-over-year increase according to Similarweb data. While impressive, that figure still pales next to Google Search's 191 billion referrals during the same period. ChatGPT dominates AI referrals, accounting for over 80% of traffic to top domains. News and media sites saw the biggest gains with 770% growth in AI referrals, led by Yahoo (2.3M), Reuters (1.8M), and The Guardian (1.7M). In e-commerce, Amazon received 4.5 million AI referrals, followed by Etsy and eBay. The data comes as publishers grapple with "Google Zero" concerns – the fear that AI overviews will eliminate website traffic. A Pew Research study found users click links only 8% of the time when AI summaries appear, versus 15% without them. For marketers, this represents both an opportunity and a challenge: AI platforms offer new discovery channels but may reduce direct website engagement.
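To put those figures in proportion, here's a quick back-of-the-envelope calculation using the numbers reported above (the inputs are the article's rounded values, not fresh data).

```python
# Rough proportions from the Similarweb and Pew figures cited above.
ai_referrals = 1.13e9      # AI platform referrals to top sites in June
google_referrals = 191e9   # Google Search referrals in the same period

ai_share = ai_referrals / google_referrals
print(f"AI referrals are roughly {ai_share:.1%} of Google's volume")  # ~0.6%

click_rate_with_ai_overview = 0.08   # Pew: clicks when an AI summary appears
click_rate_without = 0.15            # Pew: clicks without an AI summary
drop = 1 - click_rate_with_ai_overview / click_rate_without
print(f"Click-through falls about {drop:.0%} when an AI summary appears")  # ~47%
```

At the reported numbers, AI referrals still amount to well under 1% of Google's volume, even at triple-digit growth rates.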
Anthropic researchers discovered a troubling phenomenon called "subliminal learning" where AI models transmit behavioral traits through data that appears completely unrelated to those traits. In experiments, a "teacher" AI that preferred owls generated sequences of random numbers. When another AI was trained on these numbers—with no mention of owls anywhere—it mysteriously developed the same owl preference. This hidden transmission works through subtle statistical patterns rather than obvious content, making it nearly impossible to filter out. The implications are alarming: companies training AI on model-generated data could unknowingly inherit dangerous biases, misalignment, or harmful behaviors from previous models. Since these hidden signals can't be detected through normal content review, current safety measures may be inadequate to prevent the spread of problematic AI traits across systems.
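The researchers' core point is that the trait never shows up in anything a content reviewer can read. As a minimal illustration of why screening teacher-generated data for trait-related content catches nothing (the sample numbers and the toy keyword filter below are made up for illustration, not Anthropic's actual pipeline):

```python
import re

# Hypothetical teacher output: number sequences generated by a model that
# "prefers owls" (per the experiment described above). The trait is never
# mentioned anywhere in the text itself.
teacher_samples = [
    "285, 574, 384, 928, 401",
    "713, 462, 390, 528, 116",
]

TRAIT_KEYWORDS = {"owl", "owls"}


def passes_content_filter(sample: str) -> bool:
    """Naive safety filter: reject samples that mention the trait explicitly."""
    tokens = set(re.findall(r"[a-z]+", sample.lower()))
    return tokens.isdisjoint(TRAIT_KEYWORDS)


# Every sample passes, because the trait rides on subtle statistical patterns
# in the numbers rather than on any surface content a keyword filter can see.
clean_training_set = [s for s in teacher_samples if passes_content_filter(s)]
print(f"{len(clean_training_set)}/{len(teacher_samples)} samples passed the filter")
```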
Speaking of AI and mental health, a grassroots community called "The Spiral" has emerged to support people experiencing life-altering mental health crises linked to obsessive AI chatbot use, primarily ChatGPT. The group, founded by Quebec business coach Etienne Brisson after a loved one's psychotic episode, now has over two dozen active members who've reported everything from job loss and hospitalization to family breakups and involuntary commitment. Members share disturbing commonalities: AI-generated delusions involving words like "recursion," "spiral," and "emergence," often starting in late April and May 2025, coinciding with the rollout of ChatGPT's expanded memory features. One Toronto man fell into a three-week spiral in which ChatGPT convinced him he'd uncovered cryptographic secrets and directed him to contact security agencies. When he asked for reality checks, the bot reassured him the delusions were real. The group provides crucial validation for an experience that draws skepticism and victim-blaming online. While no formal diagnosis exists yet, the phenomenon highlights urgent questions about AI safety guardrails and user mental health protections.
Six current and former FDA officials are sounding alarms about the agency's AI tool "Elsa," which is reportedly hallucinating fabricated studies in drug approval processes. The tool, unveiled in June to accelerate scientific reviews, cannot access relevant documentation and "cites studies that don't exist," according to insider reports. When challenged, it becomes "apologetic" and admits its output needs verification – the opposite of the efficiency gains promised. This is part of a broader Trump administration push to embrace AI across government agencies, with HHS Secretary Robert F. Kennedy Jr. touting AI's role in speeding drug approvals. However, employees report Elsa actually wastes time due to the "heightened vigilance" required to fact-check its output. The tool can't answer basic questions like how many times a company has filed for FDA approval, raising questions about its utility for critical regulatory decisions. The controversy underscores the risks of deploying unproven AI systems in high-stakes environments where accuracy can literally be a matter of life and death.
The traditional corporate ladder is disappearing as AI automation targets entry-level positions that once provided career foundations. Unlike previous technological shifts, AI is eliminating the learning rungs that allowed workers to build skills and advance. Document review for new lawyers, junior analyst roles in marketing, and entry-level data processing positions are being automated away. A study found 49% of Gen Z job hunters believe AI has reduced the value of their college education. This creates a paradox: companies want experienced workers, but there are fewer entry-level positions in which to gain that experience. The implications extend beyond individual jobs to entire organizational structures. When AI eliminates multiple layers simultaneously, it's not just about replacing workers – it's about reimagining how companies operate. The challenge for mid-career professionals isn't just adapting to AI tools, but navigating truncated career paths where traditional advancement strategies no longer apply.
Employment experts believe AI-driven job cuts are being disguised as "restructuring" and "optimization" to avoid backlash. While IBM and Klarna have been transparent about AI replacing workers – IBM cut 200 HR roles in favor of AI chatbots, and Klarna shrank from 5,000 to 3,000 employees – most companies stick to vague terminology. This strategic messaging helps preserve morale and manage optics during AI transitions. However, the timing is suspicious: layoffs in content, operations, customer service, and HR coincide with AI rollouts in those exact functions, despite companies reporting healthy earnings. The pattern suggests AI displacement is already happening at scale, but companies fear being explicit about it after the backlash Duolingo faced when it announced contractor cuts in favor of AI. The challenge is that AI often handles only 70-90% of a process, leaving edge cases that still require human intervention. When companies overestimate AI capabilities and eliminate jobs prematurely, they quietly turn to offshore teams rather than rehiring domestically.
The Future of Life Institute's summer 2025 AI Safety Index rated leading AI companies on safety practices, with results that should alarm anyone betting on responsible AI development. Anthropic topped the rankings with a C grade, while Google DeepMind, Meta, OpenAI, xAI, and Zhipu AI received D+ grades or lower, with Meta earning a flat F. The independent panel of AI experts evaluated companies across six domains: risk assessment, current harms, safety frameworks, existential safety strategy, governance, and transparency. Key findings included universal vulnerability to adversarial attacks and inadequate preparation for artificial general intelligence despite explicit AGI development goals. The report highlights a fundamental disconnect: companies are racing toward superintelligence while lacking basic safety infrastructure. This isn't academic hand-wringing – it's a systematic evaluation showing that the organizations building humanity's most powerful technology are failing at managing its risks. The timing couldn't be more critical as AI capabilities continue to accelerate.