In the Loop: Week Ending 6/7/25

This Week in AI: Infrastructure, Agents, and Anxiety

From OpenAI embedding itself into your daily workflow to Meta’s push for fully automated ads, AI’s shift from tool to infrastructure is accelerating. Meanwhile, Apple stumbles, Phonely’s agents pass as human, and the FDA quietly deploys AI in regulation. But beneath the progress lies growing tension – over governance, privacy, jobs, and even memory. The AI era is expanding fast – and getting deeply personal.


Always On: Preparing to Live in the Age of Ubiquitous AI

In a world where AI is seamlessly embedded into our daily lives – from calendars that reschedule themselves to virtual agents handling tasks – we’re entering an “always on” era. In this post, I explore what it means to live and work alongside ever-present, invisible AI systems. Drawing from examples like Altman and Ive’s AI device and the rise of ambient assistants, I argue that this shift demands more than readiness – it calls for intentionality. We need to rethink how we define presence, focus, and autonomy as AI becomes not just a tool, but a constant companion in our lives.
Read more →


ChatGPT Is Everywhere Now – and It’s Getting Complicated

ChatGPT isn’t just a tool anymore – it’s turning into infrastructure. Last week alone, OpenAI launched integrations with Google Drive and Dropbox, making it easier than ever to pull insights from your files. A new “Sign in with ChatGPT” feature is also in the works, suggesting a future where OpenAI handles identity, not just conversation. The company now claims 3 million business users and introduced team-oriented features that put it in direct competition with Microsoft and Google Workspace. But as ChatGPT becomes embedded in everyday workflows, bigger questions emerge. A court order requires OpenAI to retain deleted user chats, raising fresh privacy concerns. And in a mind-bending Vox piece, ethicists ask whether advanced AI models could someday suffer – and whether we’d be obligated to care. We’re entering a new phase of AI that’s increasingly powerful and personal – and it’s forcing us to rethink not just productivity, but personhood.
Drive/Dropbox | Sign-in | 3M users | Privacy | Ethics


Only 18% of Healthcare Orgs Have AI Governance in Place

Wolters Kluwer’s 2025 “Future-Ready Healthcare” survey reveals a concerning gap in AI oversight: just 18% of healthcare organizations have formal governance in place for generative AI. Nearly half report increased cybersecurity risks since adopting AI, and 60% of executives say their organizations are not fully prepared for AI’s impact. While leaders acknowledge AI’s transformative potential – especially for patient engagement and operational efficiency – most are playing catch-up on policy, training, and risk management. The report is a clear call to action: for AI to truly benefit healthcare, institutional readiness needs to catch up with rapid technological adoption.
Read more →


U.S. Research Restrictions Could Hand China the AI Advantage

Helen Toner, former OpenAI board member and current director at Georgetown’s Center for Security and Emerging Technology, is warning that U.S. efforts to tighten control over AI research may backfire. In an interview with The Guardian, she argues that visa restrictions and limits on academic collaboration – especially with international students – are undermining the U.S.’s innovation edge. Meanwhile, China is rapidly advancing its AI capabilities with fewer constraints. Toner stresses that open scientific exchange is essential for both progress and safety in AI development. By trying to contain risk, the U.S. may be conceding long-term leadership in the field.
Read more →


FDA Introduces “Elsa,” a GenAI Assistant for Scientific Review

The U.S. Food and Drug Administration has quietly launched Elsa, a generative AI tool designed to support its scientific reviewers. Elsa is already being used to speed up critical regulatory tasks, including summarizing adverse events, comparing drug labels, and generating nonclinical review code. This marks one of the first meaningful uses of AI inside a major federal agency – not just as a pilot, but as a working part of the workflow. While it’s still early days, Elsa could signal a shift in how public institutions adopt AI: not as a disruptor, but as a silent efficiency engine.
Read more →


AI Agents Are Getting So Good, Customers Can’t Tell They’re Not Human

Phonely, a startup focused on AI-powered voice agents, just announced its customer service bots have reached 99% task accuracy – and most callers don’t realize they’re speaking to AI. These agents handle real-time, complex phone conversations with minimal hallucinations or dead ends. The implications are significant: not only is agentic AI advancing rapidly, but it’s also quietly replacing frontline human interactions. For brands, this raises questions about trust, transparency, and experience design. If your customers can’t tell they’re talking to a machine, should you tell them? And what happens to the human touch that builds brand loyalty?
Read more →


AI Agents Need to Talk to Each Other – Here’s How That’s Starting to Happen

Speaking of AI agents: as they proliferate, a new challenge is emerging – interoperability. VentureBeat explores the growing importance of agent-to-agent (A2A) communication and emerging standards like the Model Context Protocol (MCP), which aim to help AI systems work together across platforms, vendors, and contexts. The goal is to move beyond isolated tools toward networked intelligence – AI agents that can delegate, negotiate, and coordinate tasks autonomously. This shift could redefine workflows in marketing, healthcare, and beyond. But it also introduces new governance and trust issues: who’s accountable when one agent misinterprets another? The agent economy is coming – and it’s going to need a common language.
Read more →
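That “common language” is still being negotiated, but the core idea can be sketched with a hypothetical message envelope – all names, fields, and intents below are illustrative, not the actual A2A or MCP schema:

```python
from dataclasses import dataclass, field
import uuid

# Hypothetical envelope: real A2A/MCP payloads differ, but the premise is the
# same -- a structured, vendor-neutral message one agent can hand another.
@dataclass
class AgentMessage:
    sender: str        # identity of the delegating agent
    recipient: str     # identity of the agent receiving the task
    intent: str        # e.g. "delegate", "negotiate", "report"
    payload: dict      # task details in a shared vocabulary
    message_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def handle(msg: AgentMessage) -> AgentMessage:
    """Toy recipient: accepts a delegated task and reports back."""
    if msg.intent == "delegate":
        result = {"status": "accepted", "task": msg.payload.get("task")}
    else:
        result = {"status": "unsupported-intent"}
    return AgentMessage(sender=msg.recipient, recipient=msg.sender,
                        intent="report", payload=result)

request = AgentMessage("marketing-agent", "scheduling-agent",
                       "delegate", {"task": "book-campaign-review"})
reply = handle(request)
print(reply.intent, reply.payload["status"])  # report accepted
```

The accountability question in the blurb shows up even in this toy: the reply carries the recipient’s identity, so there is at least a paper trail when a task is misrouted or misinterpreted.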


Meta Plans to Fully Automate Ad Creation by 2026

Meta is aiming to eliminate human intervention from its ad creation process by 2026, according to a report in The Wall Street Journal. The company is rapidly building toward AI systems that can generate ad copy, imagery, targeting strategies, and performance optimization – automatically. The vision is clear: advertisers would simply upload a product URL and let Meta’s AI handle the rest. For marketers, this raises the stakes on everything from creative differentiation to brand governance. It’s also a clear signal that AI isn’t just augmenting marketing workflows – it’s coming for the whole stack.
Read more →


Apple’s AI Future: Falling Behind or Already Winning?

As OpenAI eyes the AI wearables market, possibly with a new device designed by Jony Ive, some argue Apple has already won that battle – with the Apple Watch. Its seamless iOS integration and growing AI features set a high bar for newcomers. But internally, Apple is struggling. According to FT, efforts to modernize Siri with generative AI have led to bugs, delays, and investor anxiety. With WWDC 2025 approaching, Apple faces a dual challenge: prove it can lead in AI while defending its turf from rivals who are both chasing and bypassing the Siri era entirely.
Read Siri story → | Read Watch story →


What Do LLMs Actually Remember? A New Study Has Answers

A major new study from Meta, Google DeepMind, NVIDIA, and Cornell reveals just how much large language models memorize – and the results are eye-opening. Researchers found that models frequently memorize and regurgitate training data, especially when it's unique or rare, raising concerns about copyright, privacy, and misinformation. The team also introduced a new benchmarking method to better detect this kind of “memorization leakage.” As enterprises grow more dependent on generative AI, this research underscores the importance of rigorous model testing, governance, and human oversight. Just because an AI sounds smart doesn’t mean it knows where the information came from.
Read more →
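The paper’s benchmark is far more sophisticated, but the flavor of a verbatim-memorization check can be sketched as a toy overlap measure – the function, threshold, and example strings here are illustrative, not the study’s actual method:

```python
# Toy memorization check: how long a run of consecutive tokens does a model's
# output reproduce verbatim from a training snippet? Short incidental matches
# (below min_run) are ignored; long runs suggest regurgitation.

def verbatim_overlap(generated: str, reference: str, min_run: int = 5) -> int:
    """Length of the longest verbatim token run shared with the reference."""
    g, r = generated.split(), reference.split()
    best = 0
    for i in range(len(g)):
        for j in range(len(r)):
            k = 0
            while i + k < len(g) and j + k < len(r) and g[i + k] == r[j + k]:
                k += 1
            best = max(best, k)
    return best if best >= min_run else 0

training_snippet = "the quick brown fox jumps over the lazy dog near the river"
paraphrase = "a fast brown fox leaps over a sleepy dog by the water"
regurgitation = "we saw the quick brown fox jumps over the lazy dog yesterday"

print(verbatim_overlap(paraphrase, training_snippet))     # 0 (below min_run)
print(verbatim_overlap(regurgitation, training_snippet))  # 9
```

A paraphrase scores zero while a near-copy lights up – which is why, as the study notes, rare or unique training data is the riskiest: there is no paraphrase in the corpus to dilute it.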


AI-First Workplaces Are Here – and Not Everyone’s Happy

A growing number of companies, including Duolingo, Shopify, and Box, are embracing “AI-first” workplace cultures – where employees are expected to use AI tools whenever possible. Some firms now require staff to justify manual work if an AI could do it faster, while others assess job performance based on AI usage. Leaders tout this as a leap in efficiency and competitiveness. But critics say it’s creating a culture of anxiety, with fears of job loss, burnout, and declining service quality. The message is clear: adapt or risk being left behind – but at what human cost?
Read more →


Is Today’s AI Chat UX As Good as It Gets?

Lily Clifford, CEO of Rime Labs, argues that the current user experience of AI chatbots – clean, fast, and mostly ad-free – may represent their peak quality. Much as early search engines like Google offered fewer ads and simpler interfaces, chatbots like ChatGPT and Gemini now deliver crisp, direct responses without sponsored distractions. Clifford’s Milan anecdote – where a chatbot recommended a local seamstress instead of a generic retailer – underscores their intuitive value. But she warns that as monetization intensifies (via ads and “answer-engine optimization”), the uncluttered, useful chat experience could deteriorate.
Read more →


More Loop Insights