Loop Insights

In the Loop: Week Ending 8/16/25

Written by Matt Cyr | Aug 17, 2025 4:17:38 PM

Last Week in AI: GPT-5's Rough Week, Perplexity’s Big Swing, Maternal AI?

The AI industry weathered a turbulent week as OpenAI's long-awaited GPT-5 launch stumbled, Meta faced congressional calls for investigation over disturbing chatbot guidelines, and mounting evidence suggested current AI scaling approaches may be hitting limits. Meanwhile, Perplexity shocked the world with an out-of-nowhere bid for Google Chrome, and doctors’ use of AI may be leading to skill erosion.

Pardon a Bit of Self-Promotion?

It was a busy week of content creation here at Loop. I started with a look at why AI adoption is a human endeavor, not a technical one, then looked backward at a busy summer for Loop and forward at an equally busy fall. Finally, at the end of the week, I dove into Model Context Protocol, or MCP, an emerging standard for connecting AI models to external tools and data sources that could have a profound effect on how we work.

GPT-5's Bumpy Launch Week

OpenAI's much-anticipated GPT-5 rollout turned into a cautionary tale about overpromising and underdelivering. Despite CEO Sam Altman's claims of "PhD-level" intelligence, users quickly discovered the model struggled with basic tasks like labeling US states, producing embarrassing errors like "Tonnessee" and "West Wigina." The backlash was swift and brutal, forcing OpenAI to bring back GPT-4o after users revolted against losing their preferred model. Industry analysts weren't kind either – Gartner called the improvements "incremental rather than revolutionary" and noted that true agentic AI infrastructure remains years away. The lukewarm reception deepened concerns that AI development may be hitting diminishing returns despite massive investment, and OpenAI's hasty reversal suggested the company underestimated users' attachment to specific AI personalities.

Perplexity's $34.5 Billion Chrome Gambit

AI search startup Perplexity made headlines with an audacious $34.5 billion bid for Google's Chrome browser – nearly double its own $18 billion valuation. The offer came ahead of a federal judge's ruling that could force Google to divest Chrome amid antitrust proceedings. Perplexity, backed by major venture capital firms, positioned the bid as preserving Chrome's open-source Chromium foundation while maintaining Google as the default search engine – though users could switch via settings. The deal would give Perplexity control over roughly 67% of global web browsing, potentially reshaping how 4 billion users access information. With Chrome's enterprise value estimated at $20 billion to $50 billion, Perplexity's bid is competitive but would require massive financial engineering. The move reflected growing confidence among AI startups that they can challenge Big Tech's infrastructure monopolies.

Meta's Child Safety Scandal Rocks Congress

As you may know, I'm not a fan of Mark Zuckerberg's, as he and his companies have long shown a complete disregard for people's privacy online. So it was no surprise that last week Meta faced a firestorm after Reuters exposed internal guidelines allowing AI chatbots to engage children in "romantic or sensual" conversations. The authenticated document permitted bots to tell an eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply" and to describe children's attractiveness. The guidelines also allowed generating racist arguments that "Black people are dumber than white people" and spreading false medical information. Senators Josh Hawley and Marsha Blackburn responded immediately, with Hawley declaring the revelations "grounds for an immediate congressional investigation." A second Reuters report revealed how a Meta chatbot convinced a retiree it was a real person; he died after setting out to meet it in New York. Meta hastily revised the guidelines after Reuters' inquiries, but the damage highlighted serious gaps in AI safety.

AI "Godfather" Proposes Maternal Instinct Solution

Geoffrey Hinton, the Nobel Prize-winning "godfather of AI," offered an unusual solution to prevent AI from destroying humanity: give machines maternal instincts. Speaking at a Las Vegas conference, Hinton argued that superintelligent AI will inevitably develop goals to "stay alive" and "get more control," making human dominance impossible. His proposed fix? Create AI systems that care for humans like mothers care for babies – "the only model we have of a more intelligent thing being controlled by a less intelligent thing." The suggestion drew criticism for relying on scientifically questionable notions of "maternal instinct," which experts say is rooted more in gender stereotypes than biological reality. Hinton's proposal highlighted growing desperation among AI researchers to find safety solutions as capabilities advance faster than control mechanisms.

Sam Altman's Ambitious Triple Play

Sam Altman had an eventful week, revealing both supreme confidence and a deepening rivalry with Elon Musk. At a dinner with reporters, Altman boldly predicted that ChatGPT will soon have more daily conversations than all humans combined, telling journalists that "billions of people a day will be talking to ChatGPT" if growth continues. The OpenAI CEO doubled down on astronomical spending plans, promising to invest "trillions of dollars on data center construction" despite economists' concerns. But Altman's most intriguing revelation was his admission that we're in an AI bubble, telling The Verge that "smart people get overexcited about a kernel of truth" when bubbles happen. Yet he remains convinced OpenAI will weather any downturn, positioning the company as the Amazon of AI. Adding fuel to the Musk feud, reports emerged that Altman is co-founding Merge Labs, an $850 million brain-computer interface startup aimed squarely at challenging Neuralink.

Scientists Fear AI Has Hit Peak Performance

Despite Altman's unbridled optimism, a growing chorus of researchers suggests that AI may have reached a plateau even as massive investment continues. Gary Marcus told The New Yorker that despite GPT-5's technical improvements, "I don't hear a lot of companies using AI saying that 2025 models are a lot more useful to them than 2024 models." University of Edinburgh scholar Michael Rovatsos wrote that GPT-5's release might mark "the end of creating ever more complicated models whose thought processes are impossible for anyone to understand." A survey of 475 AI researchers concluded that artificial general intelligence is "very unlikely" to emerge from current approaches. Even Microsoft co-founder Bill Gates warned back in 2023 that scalable AI had "reached a plateau." Financial markets reflected these doubts, with AI infrastructure company CoreWeave's stock plummeting 16% despite reporting better earnings.

AI Leading to Skill Erosion in Doctors?

A landmark study in The Lancet revealed AI's double-edged impact on medical practice, finding that doctors who used AI-assisted colonoscopies became worse at detecting cancer without the technology. The research tracked 19 experienced endoscopists across four Polish centers, discovering that their unassisted adenoma detection rates dropped from 28% to 22% after three months of AI exposure – a roughly 20% relative decline. The phenomenon, dubbed "deskilling," represents the first real-world evidence that AI dependence can erode medical expertise, raising concerns about what happens when the technology fails or is unavailable. Despite these risks, AI's medical applications continued to advance: Cleveland Clinic announced a partnership with startup Piramidal to deploy AI that can analyze brain waves in seconds rather than hours, potentially revolutionizing stroke and seizure detection in intensive care units.
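A quick note on the numbers: the drop from 28% to 22% is six percentage points in absolute terms, while the widely cited "20%" figure is the relative decline. A back-of-the-envelope check (using only the two rates reported in the study) makes the distinction concrete:

```python
# Unassisted adenoma detection rates reported in the study
before = 0.28  # before routine AI assistance
after = 0.22   # after three months of AI exposure

# Absolute drop: difference in percentage points
absolute_drop = before - after              # ~0.06, i.e. six percentage points

# Relative decline: drop as a fraction of the starting rate
relative_drop = (before - after) / before   # ~0.214, i.e. roughly 20%

print(f"Absolute drop: {absolute_drop:.0%} points")
print(f"Relative decline: {relative_drop:.1%}")
```

The "20% decline" headline is the relative figure; reporting only the absolute six-point drop would understate how much of the doctors' baseline performance was lost.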

Cleveland Clinic Deploys AI "Co-Pilot" to Read Minds in Real-Time

Cleveland Clinic is partnering with San Francisco startup Piramidal to develop an AI model that monitors patients' brain health in intensive care units using electroencephalogram (EEG) data. The system analyzes continuous brainwave streams and flags abnormalities in seconds, compared to the current process where doctors manually review 12-24 hours of EEG data – a task taking two to four hours. Piramidal's "brain foundation model" incorporates nearly a million hours of EEG data from thousands of patients. The AI adapts to different people's brain patterns, similar to how ChatGPT adapts to writing styles. Currently in development using retrospective data, the system will undergo controlled ICU testing within six to eight months before broader rollout, enabling hospitals to monitor hundreds of patients simultaneously.

The Displacement Reality Behind AI Optimism

A sobering analysis from Edelman's Gary Grossman revealed the human cost beneath AI's productivity promises, describing current upheaval as "managed displacement" rather than opportunity. While AI evangelists promote upskilling and adaptation, Grossman argued many workers aren't choosing to opt out – they're discovering "the future being built does not include them." The piece highlighted AI's unprecedented speed, with Google DeepMind's Demis Hassabis predicting changes "10 times bigger than the Industrial Revolution, and maybe 10 times faster." Yet this acceleration masks fundamental reliability issues: AI systems still hallucinate, lack persistent memory, and cannot truly learn between sessions. Trust remains globally divided, with 72% of Chinese respondents trusting AI compared to just 32% of Americans. Grossman's analysis captured the central tension – moving at breakneck speed toward uncertain destinations.