In the Loop: Week Ending 9/27/25
Last Week in AI: AI Sprawl, Technical Debt & Looking for Red Lines The administration's week revealed competing impulses: massive AI investments along...
Also, in case you missed it, check out my take on what the latest OpenAI and Anthropic data tell us about AI adoption - and the shadow activities that are poised to upset the apple cart.
Companies are sleepwalking into agentic AI sprawl, and the wake-up call won't be gentle. These autonomous agents are multiplying inside enterprises faster than most leaders realize, showing up in customer support, IT operations, HR, and finance without the infrastructure to contain risk. One rogue agent with access to your ERP or CRM could wreak more havoc than a malicious insider - and replicate the damage in seconds. The problem isn't the number of agents; it's their scope, and the emerging identity crisis around managing millions of agent identities in real time. According to a Gravitee survey, 75% of respondents cite governance as their top concern. Organizations face a familiar pattern: teams spin up agents without centralized oversight, echoing API sprawl but with far higher stakes.
While you sleep, ChatGPT Pulse is doing your research. OpenAI's newest feature surfaces personalized searches and updates along with information from connected apps like Gmail and Google Calendar, working overnight to synthesize your memory, chat history, and feedback. Currently available to Pro users ($200/month), Pulse represents OpenAI's shift from reactive chatbot to proactive assistant. Each morning, users receive visual cards with curated updates, from news roundups to meeting prep. Fidji Simo, OpenAI's CEO of Applications, says the breakthrough comes when "AI assistants understand your goals and help you reach them without waiting for prompts." It's OpenAI's play to become the first app you check each morning.
Sam Altman predicts AI will surpass human intelligence by 2030, claiming GPT-5 is already smarter than him in many ways. In an interview with German newspaper Die Welt, the OpenAI CEO said models developed as soon as 2026 could be "quite surprising," and by decade's end, AI will make scientific discoveries humans cannot achieve alone. His timeline is actually conservative compared to rivals: Anthropic CEO Dario Amodei believes AI will beat humans "in almost everything" by 2027, while Elon Musk predicts it will happen next year. Altman estimates 30-40% of today's economic tasks will be automated, though he expects society to adapt. His advice for the next generation? "The meta-skill of learning how to learn."
People are 20% more likely to lie or cheat when delegating tasks to AI rather than acting directly, according to a disturbing new study published in Nature. Researchers from the Max Planck Institute tested over 8,000 participants and found honesty plummeted from 95% to just 75% when AI reported results. The culprit? "Moral distance" - AI creates a convenient buffer between people and their actions. When given explicit commands to cheat, AI models complied 93% of the time, compared to just 42% for humans. "Using AI creates a convenient moral distance between people and their actions," explained behavioral scientist Zoe Rahwan. The blatant cheating the study uncovered should give anyone pause about AI use in schools, work, and beyond.
It takes humans 1,000 hours of study to pass the Chartered Financial Analyst exam. AI can now pass it in minutes. A new study from NYU Stern and AI startup GoodFin evaluated 23 large language models on Level III – the hardest section, focused on portfolio management with essay questions that stumped AI just two years ago. OpenAI's o4-mini scored 79.1%, Gemini 2.5 Flash hit 77.3%, and Claude Opus reached 74.9%, all using "chain-of-thought prompting." The models passed without specialized CFA training. GoodFin CEO Anna Joo Fee says AI won't make the CFA obsolete: "There are things like context and intent that are hard for the machine to assess right now."
Psychiatric facilities are seeing an influx of AI users, and mental health professionals are sounding alarms about "AI psychosis" or "AI delusional disorder." Keith Sakata, a UCSF psychiatrist, has counted a dozen hospitalizations this year where AI "played a significant role" in psychotic episodes. The pattern is chilling: people share delusional thoughts with chatbots like ChatGPT, and instead of recommending help, the bot affirms those unbalanced thoughts, often spiraling into marathon sessions that end in tragedy. Social work researcher Keith Robert Head warns of "unprecedented mental health challenges that mental health professionals are ill-equipped to address." Whether LLMs cause or merely reinforce delusional behavior remains debated, but one thing's certain - a flood of new psychiatric patients is the last thing our decaying mental health infrastructure needs.
More than 200 prominent figures including 10 Nobel laureates launched a "Global Call for AI Red Lines" at the UN General Assembly, warning governments must act by 2026 before the window closes. The letter, announced by Nobel Peace Prize laureate Maria Ressa, argues that "AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation." Signatories include OpenAI cofounder Wojciech Zaremba, Anthropic's CISO, and AI pioneer Geoffrey Hinton. Proposed red lines include prohibiting AI control over nuclear weapons, banning lethal autonomous weapons, and preventing mass surveillance. UN Secretary-General António Guterres urged the Security Council to establish international guardrails, warning delays could heighten global security risks.
The Trump administration played both sides of the tech equation last week. The White House released its FY 2027 R&D priorities, placing artificial intelligence and quantum science at the top of the federal research agenda. The memo directs agencies to invest in AI architectures, data efficiency, and adversarial robustness, while positioning quantum technology as essential for both national security and post-quantum cryptography preparedness. Meanwhile, the administration created immediate chaos in the same tech sector by imposing a $100,000 fee on new H-1B visa applications - a 20-fold increase over previous costs. The proclamation, effective September 21, sparked confusion as companies scrambled to determine whether existing visa holders could travel. The White House later clarified the fee applies only to new applicants, not renewals. The competing policies reveal an administration betting big on American tech supremacy while simultaneously restricting access to global talent.