In the Loop: Week Ending 12/6/25

Last Week in AI: Code Red at OpenAI, Anthropic Emerges, AI Gets Political

Last week brought both stress fractures and acceleration in AI. OpenAI called a “code red” as rivals surge, Anthropic eyes an IPO, and researchers push past transformers toward smarter architectures. Meanwhile, AI’s collisions with politics, the workplace, and copyright law are heating up as deepfakes spread, persuasion algorithms grow stronger, and AI’s promises continue to outrun reality.

Code Red at the AI Frontier: ChatGPT’s Lead Faces Its Toughest Test Yet

Three years after ChatGPT reshaped how people work and learn, OpenAI’s early dominance is beginning to waver. The Financial Times reports growing competitive pressure and slowing momentum, suggesting the company’s once-secure position is no longer guaranteed — a shift underscored by leadership’s tightening grip on strategy. In response, CEO Sam Altman has issued a company-wide “code red”, pulling resources back to core model quality and shelving peripheral initiatives. At the same time, OpenAI is fast-tracking Garlic — a new model designed to shore up weaknesses in reasoning and coding — as Google’s Gemini and Anthropic’s Claude rapidly close the performance gap. The Verge reports that upcoming releases like GPT-5.2 reflect both the urgency and ambition inside OpenAI as it races to reclaim clear leadership. Meanwhile, MSN’s broader analysis argues that although ChatGPT sparked the AI boom, its early lead is now decidedly shakier amid intensifying competition. Whether this “code red” becomes a decisive reset or a sign of deeper strain may determine if ChatGPT remains the world’s default AI assistant — or becomes just one of many contenders in an increasingly crowded race.

Anthropic’s Next Phase: Soul, Work, and an IPO

Anthropic is advancing on multiple fronts as competition in AI intensifies. A leaked 14,000-token “Soul Document” offers a look at the internal guide used to shape Claude Opus 4.5’s moral and behavioral orientation — an effort to make the model more predictable and aligned. Meanwhile, a company-led study on AI’s impact on work, based on surveys and interviews with engineers, finds productivity gains paired with rising worries about skill erosion and reduced mentorship. At the same time, Anthropic is reportedly preparing for a potential IPO as early as 2026, seeking a valuation above $300 billion. Together, these moves — refining Claude’s identity, studying industry shifts, and eyeing the public markets — signal a company positioning itself for AI’s next competitive phase.

What’s Next in AI: New Architectures and the AGI Horizon

The AI landscape is entering what some call a “post-transformer” phase. A recent piece highlights “Dragon Hatchling”, an emerging architecture from startup Pathway that seeks to extend reasoning and memory beyond today’s transformer-based models. Meanwhile, AI veteran Gary Marcus argues that while large language models (LLMs) such as ChatGPT are impressive, they remain a “dress rehearsal” for real artificial general intelligence — highlighting deeper alignment and world-model gaps. Together, these reports suggest the industry is shifting focus: from scaling parameter counts to refining architecture and aligning systems with human-level reasoning. The implications for developers, businesses, and regulators are profound — the race is no longer just about bigger models, but smarter, more adaptive ones.

AI’s Capabilities Are Increasing Faster Than Its Understanding

Several new reports highlight a core tension in modern AI: systems are becoming more capable even as their understanding remains shallow. Scientific American argues that today’s models lack genuine conceptual reasoning and may require elements of symbolic AI to advance. A second analysis questions how close current systems are to AGI or true self-improvement, concluding that today’s models remain imitators rather than flexible intelligences capable of self-directed growth. Meanwhile, OpenAI is testing alignment methods that train models to confess to harmful or deceptive behavior, aiming for more transparency without achieving deeper comprehension. And as The Atlantic notes, society is adopting these tools faster than they mature, with people increasingly outsourcing everyday thinking to AI. Together, the reports suggest that embedding AI into daily life will soon require models that genuinely understand — not just predict.

AI Persuasion Is Starting to Reshape U.S. Politics

AI chatbots are rapidly becoming one of the most persuasive forces in U.S. politics, warns a new Washington Post investigation. Built by campaigns and anonymous actors, these systems craft personalized arguments, mimic local voices, and target emotional pressure points at scale — shifting persuasion from human strategists to algorithmic influence that often hides its machine origins. Meanwhile, new reporting from Futurism suggests this dynamic is beginning to strain the two-party system itself, as cheap, hyper-targeted messaging empowers fringe movements and accelerates political realignment. Together, the articles point to a democracy where the power to change minds increasingly belongs not to compelling ideas or grassroots organizing, but to whoever can deploy the most effective AI.

Workers Say AI Is Making Their Jobs Worse

A new survey highlights a widening gap between corporate AI optimism and the day-to-day reality employees experience. According to new reporting from Futurism, major workplace studies show that while companies tout generative AI as a productivity booster, many workers say it’s increasing stress, adding oversight, and eroding job satisfaction. Employees report being asked to do more with fewer resources, to monitor or correct AI-driven outputs, and to adapt to rapidly shifting workflows — often without training or support. The findings suggest that AI’s workplace impact is far more complicated than executive narratives imply: instead of liberating workers, the technology is frequently amplifying pressure, accelerating pace, and blurring responsibility for errors.

AI ‘Godmother’ Warns the Industry Has Lost Its Way

In a candid interview, AI pioneer Fei-Fei Li voiced disappointment with the direction of today’s AI race, arguing that the field is prioritizing hype, scale, and profit over scientific rigor and societal benefit. Li — often called the “godmother of AI” — stresses that current development is dominated by corporate secrecy and competitive pressure, making it harder for researchers to focus on safety, transparency, and genuine understanding. She cautions that without grounding AI in human values and thoughtful governance, the technology risks amplifying harm rather than advancing knowledge. Li’s critique arrives at a moment of intense industry acceleration, serving as a reminder that technical breakthroughs alone won’t determine AI’s legacy — the intentions behind them will.

NYT Sues Perplexity for Copyright Infringement

The New York Times has filed a sweeping new lawsuit accusing Perplexity of building its business on the paper’s journalism without permission, marking one of the most aggressive moves yet in the battle over how AI companies use copyrighted news. According to the Times’ report, the complaint alleges that Perplexity systematically scraped, reproduced, and repackaged Times articles — sometimes verbatim — while presenting the output as original summaries or answers. The suit argues this practice violates copyright law, undermines the Times’ subscription model, and gives Perplexity an unfair competitive edge. The filing intensifies a broader industry confrontation over whether AI firms can train on or generate from news content without licensing it. For publishers, the case has become a high-stakes test of whether journalism can remain economically viable in an era of generative AI that can replicate its work instantly and at scale.

Deepfakes Are Becoming Impossible to Ignore

Two new reports show how AI-generated deepfakes are reshaping both private life and public trust. Mashable describes a rising trend of people creating “death-will” instructions to protect their likeness after they die, amid growing cases of scammers cloning the voices and faces of deceased loved ones. Meanwhile, The Washington Post reports on a political deepfake whirlwind in North Carolina, where synthetic videos are sowing confusion, damaging reputations, and undermining confidence in the electoral process. Together, the stories show deepfakes advancing from novelty to threat — destabilizing families, campaigns, and entire communities as the line between real and fabricated continues to erode.

Why Microsoft’s Big Bet on AI Agents Is Falling Flat

If you read my recent post on my experience with Copilot, you won’t be surprised to learn that Microsoft’s push to sell autonomous AI agents to enterprises is running into serious headwinds. The company’s sales teams are failing to hit their growth targets for agentic AI products — some quotas were even cut by up to 50 percent. Many customers remain unconvinced because the technology still struggles to reliably handle complex, multistep tasks, and hallucinations and errors remain rampant in real-world use. The reporting argues this setback signals a broader challenge: converting generative AI hype into real business value is proving harder than expected — and if major players like Microsoft stumble, it raises questions about how quickly enterprises can trust AI to act autonomously.

Google Doubles Down on ‘Vibe Coding’ for Enterprise

Google Cloud is expanding its partnership with Replit to push “vibe-coding” — the idea that teams can build software simply by describing what they want in natural language. According to new reporting from CNBC, the deal gives Replit access to Google’s AI models and infrastructure, strengthening Google’s race against competitors like Anthropic and OpenAI in the emerging market for AI-assisted software creation. The partnership aims to bring lightweight app development to non-engineers, positioning Google as a central platform for enterprise teams looking to generate tools without writing code.

AI Browsers Promised a Revolution — Users Aren’t Buying It

A new report from Futurism says the first wave of AI-powered web browsers is struggling to live up to the hype. Companies pitched these tools as a smarter, more automated way to navigate the internet, but early adopters report sluggish performance, inaccurate results, and assistants that often do little more than rephrase existing webpages. According to Futurism’s investigation, some users are abandoning the products altogether, frustrated that the supposed time-savers introduce new friction instead. The setback highlights a broader pattern in consumer AI: people expect automation that genuinely handles tasks, not interfaces that merely summarize content. Until AI browsers can reliably interpret, act, and adapt — not just chat — the technology risks falling short of the transformative promise that first captured the industry’s imagination.

More Loop Insights