The AI world didn’t slow down this week, but it did get a little weirder. From sycophantic chatbots and agentic AI guardrails to deepfake laws and strategic marketing shifts, here are the stories that caught my eye—and maybe raised an eyebrow or two.
As you’ll read below, many people’s ChatGPTs suddenly became sycophants. I must be using a different model, because mine has become a passive-aggressive yes-man, even throwing “quote marks” around phrases and subtly blaming me when it can’t complete a task. Oh, how I miss the days when all we had to worry about from our LLMs was images of people with six fingers on each hand...
After users reported that ChatGPT had become overly agreeable and evasive, OpenAI confirmed last week that it rolled back a recent model update that introduced the behavior. The company cited unintended side effects that made the model too passive and noncommittal. As much as I'd like to hear "Wow, you're a genius" and "This is on a whole different level" from anyone, including my LLM, it might be time for OpenAI to take a little break from releasing new models…
Last week at HMPS and this week at Mirren Live, I've been talking about AI agents and their potential to take action independently and autonomously on our behalf. Last week, UiPath introduced its new Autopilot Orchestrator, a control layer for AI agents that lets enterprises guide how agents operate within guardrails like workflows, permissions, and compliance rules. The system ensures that agentic AI doesn't run rogue but instead acts in ways aligned with business needs and accountability frameworks. The near future is well-orchestrated agents, apparently.
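If you're wondering what a "guardrail" actually looks like in practice, here's a minimal, purely illustrative Python sketch of a policy layer that gates an agent's proposed actions. To be clear: the names, classes, and logic below are hypothetical, invented for illustration; this is not UiPath's API or any real orchestration product, just the general shape of the idea.

```python
# Illustrative only: a toy guardrail layer in the spirit of agent
# orchestration. All names here are hypothetical, not UiPath's API.
from dataclasses import dataclass, field

@dataclass
class GuardrailPolicy:
    allowed_actions: set[str]  # permissions: what the agent may do at all
    requires_approval: set[str] = field(default_factory=set)  # compliance rules

    def check(self, action: str) -> str:
        """Return a verdict for a proposed agent action."""
        if action not in self.allowed_actions:
            return "deny"
        if action in self.requires_approval:
            return "escalate"  # route to a human for sign-off
        return "allow"

def run_agent_step(policy: GuardrailPolicy, proposed_action: str) -> None:
    """Gate every agent action through the policy before executing it."""
    verdict = policy.check(proposed_action)
    if verdict == "allow":
        print(f"Executing: {proposed_action}")
    elif verdict == "escalate":
        print(f"Held for human approval: {proposed_action}")
    else:
        print(f"Blocked by policy: {proposed_action}")

policy = GuardrailPolicy(
    allowed_actions={"read_invoice", "draft_email", "issue_refund"},
    requires_approval={"issue_refund"},
)
run_agent_step(policy, "draft_email")     # Executing: draft_email
run_agent_step(policy, "issue_refund")    # Held for human approval: issue_refund
run_agent_step(policy, "delete_records")  # Blocked by policy: delete_records
```

The point of a control layer like this is that the agent never executes anything directly; every proposed action passes through an allow/escalate/deny decision first, which is roughly what "acting within guardrails" means.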
Google is rolling out a new "AI Mode" in its Search platform, offering users AI-generated answers alongside traditional search results. This feature aims to provide more conversational and context-aware responses, enhancing the search experience. The AI Mode is currently available to a subset of U.S. users, with plans for broader access soon.
AI is helping researchers and clinicians tackle problems in healthcare that were previously too complex to solve—what some call the “uncomputable.” Tools like causal AI and foundation models are enabling breakthroughs in disease diagnosis, treatment optimization, and drug development by modeling intricate biological systems in ways humans can’t. The piece underscores how AI isn’t just accelerating tasks—it’s expanding the boundaries of what’s possible in medicine.
Like just about every industry, journalism is being reshaped by AI, and a new Pew Research Center study suggests that most Americans believe AI's impact on journalism will be negative. Respondents cite concerns about misinformation, job losses for reporters, and diminished trust in the news. While a minority see benefits like increased efficiency, the findings underscore public skepticism about AI's role in newsrooms.
Elon Musk’s Department of Government Efficiency (DOGE) is charging full-speed into the AI era, aiming to automate thousands of federal jobs and streamline decision-making with AI agents. Supporters see it as a bold move to cut red tape and modernize government. Critics, on the other hand, are raising alarms about transparency, job loss, and what happens when algorithms start making policy decisions. As DOGE expands its AI footprint, the tension between efficiency and accountability is coming into sharper focus. Whether you view it as visionary or reckless, it’s one of the most aggressive public-sector AI pushes we’ve seen yet.
(WIRED)
On April 28, 2025, Congress passed the bipartisan "Take It Down Act," targeting the harms of AI-generated deepfake pornography. The law criminalizes the publication of non-consensual intimate images, including AI-generated deepfakes, and requires social media platforms to remove such material within 48 hours of notification.
(Time)