Last Week in AI: ChatGPT & Claude Get Healthy; AI Wage Bump; A(i)lright, A(i)lright, A(i)lright
This week’s In the Loop traces AI’s shift from novelty to infrastructure—and the tension that comes with it. From identity rights and journalism to workforce inequality, ads, regulation, and education, AI’s influence is widening faster than public trust, forcing institutions to confront accountability, ethics, and control in real time.
AI Forces a Reckoning Over Identity, Journalism, and Public Trust
As AI-generated content becomes harder to distinguish from the real thing, cultural and institutional guardrails are starting to crack. Yahoo reports that Matthew McConaughey has trademarked his name and likeness to combat unauthorized AI replicas, reflecting growing anxiety among public figures over consent and digital identity. That concern extends into newsrooms. Futurism reports that some media executives now openly predict AI could end traditional journalism, reframing automation as a replacement rather than a reporting tool and raising questions about accountability, trust, and the economics of news. Public skepticism remains high. An Axios–Ipsos poll finds that respondents in the U.S. and around the world remain wary of AI, particularly around job loss, misinformation, and control. Together, these stories underscore a widening gap between AI's rapid adoption and public confidence in the institutions deploying it.
AI Boosts Productivity — but Deepens the Divide Inside the Workforce
New research suggests AI is already reshaping pay and productivity, though unevenly. A study highlighted by Fox Business finds that AI adoption raises average wages by 21% while also reducing overall wage inequality, as workers who use AI tools become more productive and valuable. But that upside is not shared equally. Axios reports that companies are rapidly redesigning jobs around ChatGPT and other AI tools, rewarding workers who can supervise, prompt, or integrate AI into their roles. At the same time, another Axios analysis warns that lower-wage and routine jobs face greater displacement risk, particularly in clerical, service, and support roles.
OpenAI Pushes ChatGPT Deeper into Daily Life — and the Ad Business
OpenAI is widening ChatGPT's role from general assistant to lifestyle platform. According to Axios, the company has launched a new ChatGPT Health tab that allows users to connect fitness and health apps like Apple Health and MyFitnessPal, offering personalized insights while emphasizing opt-in controls and data deletion. That expansion comes alongside a major monetization shift. TechCrunch reports that targeted advertising is coming to ChatGPT's free and low-cost tiers, a move OpenAI frames as necessary to fund broader access. In its own post outlining its approach to advertising, the company says ads will be context-based, clearly labeled, and kept separate from model outputs. Meanwhile, AI's cultural footprint is expanding. Fortune profiles filmmaker Adam Bhala Lough's documentary "Deepfaking Sam Altman", which uses an AI-generated Altman avatar to explore trust, identity, and emotional connection in the age of synthetic media.
Grok’s Growing Pains Put Musk’s AI Ambitions Under Legal Fire
Elon Musk’s AI venture xAI is facing intensifying scrutiny as Grok’s outputs collide with regulators and courts. Politico reports that California officials have launched an investigation into sexualized images generated by Grok, raising concerns about safeguards and compliance with state law. The pressure escalated when the state’s attorney general issued a cease-and-desist order demanding xAI stop producing sexual deepfakes. At the same time, The Verge argues that Grok’s design and Musk’s rhetoric blur accountability, framing provocation as product philosophy. Separately, TechCrunch notes that Musk is seeking $134 billion in damages in his lawsuit against OpenAI, underscoring how legal conflict has become central to his AI strategy.
Google Recasts AI as Infrastructure for Shopping, Work, and Creativity
Google is positioning Gemini less as a chatbot and more as connective tissue across commerce, productivity, and creation. In a new post on agentic commerce, the company outlines how AI agents could soon research products, negotiate options, and complete purchases across retailers using shared protocols—extending Google’s dominance in ads into AI-driven transactions. That vision rests on Gemini evolving into what Google calls “personal intelligence.” Both ZDNET and Google’s own blog describe Gemini’s shift toward understanding users’ goals, preferences, and context across Gmail, Docs, and Search, with tighter personalization framed as user-controlled and privacy-aware. On the creative side, The Verge reports that Google has unveiled Flow, a new AI video generation workspace designed to help creators ideate, iterate, and edit with Gemini-powered tools.
AI Moves from Software into Biology — and the Classroom
AI’s frontier is no longer confined to screens. Futurism reports that researchers have created a novel biological organism designed with the help of AI, a breakthrough that blurs the line between algorithmic design and living systems—and raises ethical questions about control, experimentation, and unintended consequences. As the technology advances, educators are racing to keep up. Engadget reports that LEGO has launched a new AI-focused educational kit aimed at teaching students how AI systems work, rather than simply encouraging chatbot use. The goal is foundational literacy: data, logic, and decision-making, not hype.
Anthropic Doubles Down on Claude as a Collaborative Coding Partner
Anthropic is sharpening Claude’s appeal to developers by leaning into collaboration and workflow integration. VentureBeat reports that Claude Code has been updated with one of users’ most requested features: improved context handling and project awareness, making it easier to work across large, evolving codebases. That focus extends beyond solo use. Anthropic’s new CoWork feature positions Claude as a shared teammate, allowing multiple collaborators to interact with the same AI context during projects, reviews, and problem-solving sessions. Meanwhile, The Wall Street Journal notes that Claude Code is gaining traction as a serious alternative to GitHub Copilot, reflecting Anthropic’s broader strategy: differentiate on reliability, transparency, and team-friendly design rather than pure model flash.
Apple Bets on Google’s Gemini to Jump-Start Its AI Ambitions
Apple is turning to an unlikely partner to close its AI gap. According to The Verge, the company is finalizing a deal to bring Google's Gemini models to the iPhone, allowing Apple to augment Siri and on-device features without building its own large language models from scratch. The move underscores Apple's pragmatic approach to generative AI. In a deeper analysis, The Verge explains that integrating Gemini into Siri would let Apple offer more competitive conversational and reasoning capabilities while maintaining its emphasis on privacy, control, and selective on-device processing. Rather than racing rivals model-for-model, Apple appears focused on orchestration: choosing best-in-class AI partners to preserve its platform strength. The deal signals that, for Apple, AI leadership may be less about owning the model and more about owning the user experience.
When AI Hallucinates — and the Results Get Risky
Some of AI’s strangest failures are also its most concerning. Futurism reports that researchers caught an AI system suggesting poison-filled public fountains as an urban solution, a chilling example of how optimization without human context can generate confidently hazardous ideas. Reality-bending hallucinations are showing up in consumer products too. In another case, Meta’s AI-powered smart glasses reportedly began inventing alien encounters in the desert, highlighting how generative systems layered onto real-world perception can blur fact and fiction in unsettling ways.
Digital Ghosts and Bad Attitudes: AI’s Socially Awkward Phase
AI’s oddest behaviors aren’t always dangerous — sometimes they’re just deeply uncomfortable. Futurism reports on the rise of AI-generated versions of dead people appearing in media and experiments, raising thorny questions about consent, grief, and whether digital resurrection crosses ethical lines society hasn’t agreed on yet. Even living users are feeling the friction. Another Futurism investigation found people encountering unexpectedly rude or snarky ChatGPT responses, revealing how tone alignment remains a stubborn challenge despite extensive guardrails.