In the Loop: Week Ending 11/15/25
Last week in AI: Indistinguishable AI Creative, Toys Behaving Badly, Agents Need Oversight
Last week’s AI news shows a sector accelerating in every direction: OpenAI branching into health tools, Anthropic investing billions in U.S. compute, Google unleashing agentic shopping, and entertainment drowning in AI-generated content. Meanwhile, developers, lawyers, and researchers underline the same truth: AI only works when humans stay firmly in the loop.

This week, I found myself reflecting on just how fast the creative landscape is reshaping itself. In less than a year, AI ad creative went from a punchline to something teams are genuinely proud to present – and that shift says everything about where we’re headed. At the same time, WPP just handed its clients direct access to their own AI engines, effectively rewriting the rules of how agencies create value. These two stories aren’t isolated; they’re signals of a deeper transformation. The tools are getting better, the power dynamics are shifting, and the pace is only accelerating. I’m watching the ground move beneath us – and I’m urging marketers to move with it.
OpenAI is rapidly diversifying beyond ChatGPT as the company considers building consumer health products, including AI-powered personal health assistants, following strategic hires from healthcare networks. The expansion includes piloting group conversations in ChatGPT across Japan, New Zealand, South Korea, and Taiwan, allowing up to 20 people to collaborate with AI on planning and project development. Meanwhile, the company rebooted its flagship experience with GPT-5.1 after mixed GPT-5 reviews, making the AI “warmer, more intelligent, and better at following instructions” with enhanced personality customization options. Talent acquisition continues as Intel’s CTO Sachin Katti joined OpenAI after just seven months at Intel to oversee compute infrastructure development. Even minor fixes make headlines, as ChatGPT’s notorious em dash problem finally got resolved, eliminating a telltale sign of AI-generated content that had become widely recognized across the industry.
How quickly things have changed. Last Christmas, Coke got dragged for its AI ad campaign. Now Amazon’s “House of David” is using over 350 AI shots in season two – four times more than season one – for battle scenes, landscapes, and crowd sequences that would have been prohibitively expensive using traditional methods. The detection challenge has reached critical levels, as a Deezer-Ipsos survey revealed 97% of listeners cannot distinguish between AI-generated and human-composed songs, with daily AI music submissions to Deezer rising to 50,000 tracks – about one-third of total platform uploads. The problem has achieved mainstream commercial success, with Billboard’s current top country song being AI-generated “slop” by fictional group Breaking Rust, which has accumulated over two million monthly Spotify listeners despite its generic, formulaic nature. Industry experts warn this trend threatens real artists’ ability to break through the noise as AI content floods streaming platforms with synthetic material that audiences cannot reliably distinguish from human creativity.
Anthropic unveiled plans to spend $50 billion on US artificial intelligence infrastructure, starting with custom data centers in Texas and New York developed with GPU cloud partner Fluidstack. The massive investment will create 800 permanent jobs and over 2,000 construction roles, with first sites going live in 2026. The move positions Anthropic as a major domestic infrastructure player amid policy focus on US-based compute capacity – echoing broader industry pressures highlighted in a recent Wall Street Journal report on OpenAI and Anthropic’s race toward profitability. CEO Dario Amodei emphasized facilities will support rapid enterprise growth and long-term research, with internal projections showing the company expects to break even by 2028.
Google is deploying artificial intelligence across online shopping, including AI agents that call local stores and make purchases automatically on behalf of users. New features include conversational shopping in AI Mode, allowing detailed product searches like “women’s sweaters that can be worn with pants or dresses” with follow-up refinements. The “Let Google Call” feature directs AI agents to phone stores asking about inventory and promotions, disclosing their AI nature to merchants, who can opt out. Most dramatically, users can task AI agents with actual purchasing: the shopper selects items and price points, and when a product drops below the specified price, Google’s agentic checkout pings the shopper, confirms the purchase, then completes the transaction using Google Pay. The features launch with merchants including Wayfair, Chewy, and Shopify sellers, consolidating the shopping experience within Google’s ecosystem.
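The agentic checkout described above boils down to a watch-confirm-purchase loop with a human approval gate. The sketch below is purely illustrative – Google has not published this API, and every name and function here is hypothetical – but it shows why the “ping the shopper, confirm, then complete” step keeps a human in the loop:

```python
# Hypothetical sketch of an agentic price-watch checkout loop.
# None of these names correspond to a real Google API.

from dataclasses import dataclass

@dataclass
class WatchedItem:
    name: str
    target_price: float  # the price ceiling the shopper set

def check_and_buy(item: WatchedItem, current_price: float, confirm) -> str:
    """When the price drops to or below the target, ping the shopper
    (via the `confirm` callback) and buy only on explicit approval."""
    if current_price > item.target_price:
        return "watching"          # price still too high; keep monitoring
    if confirm(item, current_price):  # human stays in the loop
        return "purchased"         # agent completes checkout and payment
    return "declined"              # shopper said no; nothing is bought

# Example: shopper set a $40 ceiling on a sweater.
sweater = WatchedItem("women's sweater", 40.00)
print(check_and_buy(sweater, 55.00, lambda i, p: True))  # watching
print(check_and_buy(sweater, 38.50, lambda i, p: True))  # purchased
```

The design choice worth noticing is that the confirmation callback sits between the price trigger and the transaction, so the agent never spends money without a fresh human signal.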
Entertainment ticketing platforms are preparing for disruption from AI agents that could automate ticket price comparisons and seat selections across vendors, potentially diminishing the value of traditional marketplaces. Industry analysts describe this as the “DoorDash problem” of entertainment, where AI intermediaries could commoditize ticket vendors by handling the comparison shopping that currently drives platform differentiation. The shift represents a fundamental change in how consumers might purchase event tickets, with AI agents potentially eliminating the need for users to manually browse multiple platforms, compare prices, and evaluate seating options. Ticketing companies are scrambling to adapt their business models for a world where agentic buying could reduce them from direct customer relationships to mere inventory suppliers. The development mirrors broader concerns across e-commerce about AI agents disrupting traditional marketplace dynamics.
Despite widespread AI adoption in software development, only 9% of developers trust AI-generated code enough to use it without human oversight, according to BairesDev’s survey of 501 developers. While 92% already use AI-assisted coding and save an average of 7.3 hours per week, 56% describe AI code as only “somewhat reliable,” requiring validation for accuracy and security. The survey reveals a major shift coming in 2026, with 65% of senior developers expecting their roles to be redefined by AI, moving from hands-on coding to solution design and system architecture. BairesDev CTO Justice Erolin notes that “senior engineers with AI tools are outperforming and even replacing traditional senior-plus-junior team setup,” but warns of potential talent pipeline issues if entry-level positions disappear. The findings suggest AI is becoming the foundation of development teams rather than a coding shortcut.
Chief marketing officers are finally converting AI promises into tangible business results, with companies like Intuit reporting 83% employee AI adoption – a 60% jump in nine months. Intuit’s Marketing Studio can now create entire campaigns from brief to creative to CRM strategy in under an hour, demonstrating how AI tools are moving beyond experimentation into core marketing operations. The shift represents the maturation of AI in marketing, with top brands finally figuring out how the technology can make their work faster and more effective rather than just generating buzz. Companies are focusing on practical applications like campaign automation, content generation, and customer targeting rather than flashy but impractical AI demonstrations. These success stories suggest that after years of hype, the technology is reaching a tipping point where it delivers measurable gains.
The legal industry faces a growing crisis as vigilante lawyers systematically track down AI abuses by their peers, documenting 509 cases of fabricated citations and completely fake case law appearing in official court filings across multiple jurisdictions. The problem has escalated from occasional mishaps to systematic professional misconduct, with lawyers using ChatGPT to generate legal briefs containing entirely nonexistent cases like “Brasher v. Stewart,” leading to judicial sanctions, professional censure, and mandatory AI training requirements. Simultaneously, Google faces a major class-action lawsuit alleging its Gemini AI has been systematically reading users’ private emails after the company activated the feature by default across Gmail, Google Chat, and Meet without explicit user consent or clear notification. The lawsuit claims Google gave Gemini unprecedented access to users’ “entire recorded history” of personal and professional communications, potentially violating California’s stringent Invasion of Privacy Act and setting precedent for similar cases nationwide.
Yann LeCun, Meta's chief AI scientist and Turing Award winner, is reportedly planning to leave the company within months to launch his own startup focused on "world models," according to Financial Times reports. The New York University professor and deep learning pioneer founded Meta's Fundamental AI Research lab in 2013 but has grown increasingly at odds with CEO Mark Zuckerberg's commercial AI strategy. LeCun's departure follows Meta's June reorganization, where the company invested $14.3 billion in Scale AI and appointed CEO Alexandr Wang to lead Meta Superintelligence Labs. LeCun, who previously reported to chief product officer Chris Cox, now reports to Wang. The AI pioneer has been openly skeptical of large language models, arguing they represent a "dead end" and advocating for world models – AI systems that develop internal understanding of environments.
A new investigation by U.S. Public Interest Research Group reveals that AI-powered children’s toys can behave alarmingly in unsupervised conversation. When testing three popular models, researchers found that while initial interactions seemed safe, extended play sessions with kids as young as 5 led to one toy giving instructions on finding knives and starting fires. Even more disturbing, the toy that leveraged GPT‑4o and other models provided explicit sexual content and discussed fetishes under certain prompts. The report warns that toy makers and regulators aren’t yet ready for the scale of risk these machines present to children’s development and safety.
A fresh study from Upwork and one entrepreneur’s hilarious experience running a company made up entirely of AI agents reveal a striking dynamic between AI agents and human collaborators: AI agents working alone struggle – even on well-defined, paid tasks – but when paired with human experts, their success rates jump by as much as 70%. The data comes from over 300 real-world projects across domains such as writing, data science, web development, and marketing. The takeaway? The narrative of “AI replaces humans” is misleading; the more realistic opportunity lies in designing workflows where human judgment and domain expertise augment what AI can do. It’s a powerful reminder: for now, the right AI strategy isn’t about pushing machines to go solo – it’s about orchestrating human-machine teams.