In the Loop: Week Ending 12/13/25

Last Week in AI: Big Deals, Monetizing AI, Digital IDs

Last week saw AI shift from novelty to infrastructure: collective power replacing lone heroes, standards like MCP promising interoperability, governments centralizing regulation, and deals cementing distribution advantages. Monetization, everyday reliance, mounting errors, and rising cyber and governance risks underscore a central tension: AI's impact is accelerating faster than our ability to control it.

Time’s “Person” of the Year: The AI Architects

Time named a group of leading technologists as its 2025 Person of the Year: the AI Architects, recognizing the executives, researchers, and engineers shaping how artificial intelligence is built and deployed worldwide. The designation reflects a shift away from celebrating a single individual toward acknowledging the collective influence of figures behind systems like ChatGPT, Gemini, and other foundational models that are rapidly transforming work, culture, and geopolitics. Time argues that these architects now wield outsized power, designing tools that affect billions while raising unresolved questions about accountability, safety, and who ultimately controls AI's direction.

Trump Orders Development of a Single National AI Regulatory Framework

President Trump has signed an executive order calling for a single national framework to regulate artificial intelligence, aiming to replace the growing patchwork of state-level AI laws with a unified federal approach. According to CNBC, the order directs federal agencies to develop consistent standards for AI safety, transparency, and use, while limiting states' ability to impose their own regulations. The administration argues that a national framework will reduce compliance costs, encourage innovation, and give U.S. companies clearer rules as they compete globally. Critics warn that preempting state laws could weaken consumer protections and slow responses to emerging risks. The move marks a significant shift toward centralized AI governance and sets up a broader debate over who should shape AI policy in the U.S.

AI Deals Signal a New Phase of Power and Control

This week's AI dealmaking shows generative models becoming core infrastructure across government, enterprise, and entertainment. According to Bloomberg, the U.S. Department of Defense has selected Google's AI platform to power GenAI.mil, extending generative AI tools to millions of Pentagon employees and marking one of the largest government deployments of commercial AI to date, underscoring Washington's growing reliance on private-sector models. At the same time, The Verge reports that Disney and OpenAI have struck a broad agreement allowing Disney characters from Marvel, Star Wars, and Pixar to appear in OpenAI's Sora and ChatGPT image tools, while The Wall Street Journal reports that Anthropic and Accenture have partnered to bring Claude models to large enterprise clients. Together, the deals signal a clear shift: as AI matures, advantage increasingly depends on distribution, trusted relationships, and scale, not just model performance.

OpenAI Updates Reveal Power Gains—and New Fault Lines

Recent reporting suggests OpenAI's rapid pace is delivering real productivity gains while sharpening concerns about control and risk. Bloomberg reports that internal research shows ChatGPT is saving workers nearly an hour a day on average, helping explain its accelerating adoption and OpenAI's push to release new models faster. That speed has created internal strain: Futurism reports on newly surfaced accounts of panic inside the company involving CEO Sam Altman, reflecting pressure from competition, governance challenges, and rising expectations that OpenAI must scale without failure. The wider industry is responding in kind. WIRED describes OpenAI's latest launches as an expanding "code red" moment for AI rivals racing to keep up, while Axios warns that newer OpenAI models are raising cybersecurity risks, particularly around automated exploitation. Cultural scrutiny is growing as well: The Atlantic argues that ChatGPT projects a self-serving optimism about its impact, an outlook that may smooth adoption even as unresolved risks persist.

AI’s Next Act: Turning Intelligence into Revenue

As generative AI matures, the focus is shifting from capability to cash flow. Seeking Alpha reports that Google is moving to monetize Gemini through advertising, signaling that AI assistants are becoming new surfaces for ads rather than stand-alone subscription products. The strategy mirrors Google's core business: integrating ads directly into AI-generated responses and experiences, potentially redefining search and raising questions about how commercial incentives will shape what users see, and trust. OpenAI is navigating that same tension from the opposite direction. According to The Decoder, the company insists that ChatGPT's shopping suggestions should not be viewed as advertising, even as they guide users toward specific products and merchants. OpenAI argues the recommendations are utility-driven, not paid placements, underscoring a careful attempt to preserve credibility while still exploring paths to monetization. Together, the moves highlight a central challenge for AI companies: finding sustainable revenue models without undermining user trust as assistants evolve into powerful intermediaries between consumers and the marketplace.

AI Errors Keep Slipping into the Spotlight

A new round of misfires shows how AI's expanding role in media, research, and consumer products remains plagued by reliability gaps. Futurism reports that McDonald's recently aired an AI-generated commercial that drew attention for its awkward visuals and odd details, highlighting how brands rushing to adopt generative tools risk embarrassment when quality control falters. The issue runs deeper. Semafor found that The Washington Post's AI-generated podcasts were riddled with factual errors and fabricated quotes, while Futurism reports that a surge of AI-generated "slop" papers is overwhelming academic journals and reviewers. Consumer tech faces similar concerns: NBC News warns that AI-powered toys marketed to children can produce misleading or inappropriate responses. Together, the stories underscore a persistent problem: even as AI becomes more visible, basic errors remain one of its most consequential weaknesses.

AI Use Expands Across Work, School, and Home

New reporting shows AI shifting from a productivity tool into a quiet companion for work, school, and even parenting. Futurism reports that Sam Altman has suggested that caring for a baby would be "impossible" without ChatGPT, a comment that reflects how deeply some users are beginning to rely on AI for emotional support, decision-making, and everyday problem-solving. That reliance is reshaping institutions as well. The Washington Post reports that colleges are increasingly turning to oral exams to counter AI-assisted cheating, signaling how widespread student use of generative tools has become, and how assessment itself is being redesigned in response. In the workplace, Axios reports that Microsoft is positioning Copilot as a personal productivity layer woven into daily workflows, encouraging workers to treat AI less as a novelty and more as a constant collaborator. Together, the stories suggest AI is no longer something people "use" occasionally, but something many are beginning to live with.

Alaska Moves Toward an AI-Powered Digital ID System

Alaska is exploring an ambitious plan to roll out an AI-powered digital identity system that could link personal identification with payments and public services, according to Reclaim The Net. State officials say the initiative is intended to modernize identity verification, reduce fraud, and streamline access to government programs by using artificial intelligence to authenticate residents more efficiently. The proposal would potentially integrate financial functions, allowing the digital ID to be used for payments, benefits, and official transactions. Privacy advocates, however, warn that combining identity, AI, and payment systems could create new surveillance risks and expose residents to data misuse or breaches. The plan highlights a growing tension as governments look to AI-driven digital infrastructure while grappling with concerns about consent, security, and long-term civil liberties.

AI Insider Warns of What Comes Next

A senior scientist at Anthropic is raising fresh alarms about the long-term risks of advanced artificial intelligence, arguing that the technology could eventually pose serious threats if left unchecked. Futurism reports that the researcher has publicly warned that powerful AI systems could lead to catastrophic outcomes, echoing concerns voiced by a growing number of AI insiders. The warning centers on the speed at which models are improving, the difficulty of aligning them with human values, and the lack of robust global governance. While Anthropic is known for emphasizing safety and responsible development, the comments highlight a widening gap between commercial pressure to scale AI and the unresolved question of how to control systems that may eventually exceed human oversight.

Is Model Context Protocol (MCP) the New Internet?

If you read my post on MCP earlier this year, you know it's a standard way of connecting AI models to external tools and data sources. Now, a growing group of AI companies is backing MCP as a new standard to make advanced models easier to connect, control, and deploy across systems. The effort, led by the AI Alliance, aims to create a common way for AI models to access tools, data, and instructions. Proponents argue the protocol could reduce fragmentation by allowing developers to build applications that work across different models without custom integrations for each one. The push reflects a broader industry shift as AI companies move beyond model performance and focus on interoperability, governance, and real-world deployment. If widely adopted, MCP could quietly shape how AI systems communicate, and who controls the rules of that interaction.
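To make that concrete, here is a minimal sketch of an MCP server using the FastMCP helper from the official Python SDK. The get_forecast tool and its canned reply are hypothetical stand-ins; the point is that any MCP-capable client can discover and call the tool without a custom integration.

```python
# Minimal MCP server sketch, assuming the official Python SDK is
# installed (pip install mcp). The tool below is a hypothetical
# stand-in for a real data source or API.
from mcp.server.fastmcp import FastMCP

# Name the server; connecting MCP clients see this identifier.
mcp = FastMCP("weather-demo")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short forecast for a city (canned reply for illustration)."""
    return f"Forecast for {city}: sunny, 20°C"

if __name__ == "__main__":
    # Serve over stdio, the default transport for locally attached clients.
    mcp.run()
```

Because the tool's name, parameters, and docstring are exposed through the protocol itself, any client that speaks MCP can discover get_forecast at runtime rather than relying on a hand-built integration, which is exactly the fragmentation-reducing property proponents describe.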

AI Hackers Narrow the Gap With Humans

Artificial intelligence is rapidly approaching human-level performance in cybercrime, raising alarms among security experts. The Wall Street Journal reports that AI-powered hackers are coming dangerously close to matching human attackers, with models now able to write convincing phishing emails, probe software vulnerabilities, and automate attacks at unprecedented scale. While human hackers still retain an edge in creativity and strategic judgment, researchers warn that AI systems are improving faster than defenses. The shift could dramatically lower the barrier to entry for cybercrime, enabling less-skilled actors to launch sophisticated attacks. The article underscores growing concern that as AI tools become more capable and widely available, cybersecurity may become one of the earliest and most consequential arenas where machines begin to outperform humans.

More Loop Insights