In the Loop: Week Ending 9/27/25

Last Week in AI: AI Sprawl, Technical Debt & Looking for Red Lines

The administration's week revealed competing impulses: massive AI investments alongside restrictive H-1B fees threatening talent pipelines. AI's real-world impacts intensified: agents are proliferating faster than governance can contain them, systems are passing elite exams in minutes, and AI use is creating "moral distance" that encourages unethical behavior while "AI psychosis" cases fill psychiatric facilities. At the UN, 200+ leaders demanded global red lines before the control window closes.

Also, in case you missed it, check out my take on what the latest OpenAI and Anthropic data tell us about AI adoption – and the shadow activities poised to upset the apple cart.

The Coming Collapse: When AI Agents Multiply Faster Than Governance

Companies are sleepwalking into agentic AI sprawl, and the wake-up call won't be gentle. These autonomous agents are multiplying inside enterprises faster than most leaders realize, showing up in customer support, IT operations, HR, and finance without the infrastructure to contain risk. One rogue agent with access to your ERP or CRM could wreak more havoc than a malicious insider – and replicate the damage in seconds. The problem isn't the number of agents; it's their scope and the emerging identity crisis around managing millions of agent identities in real time. According to a Gravitee survey, 75% cite governance as their top concern. Organizations face a familiar pattern: teams spin up agents without centralized oversight, echoing API sprawl but exponentially more dangerous.

Microsoft's $85 Billion Bet: AI Agents Attack Technical Debt

Microsoft is launching autonomous AI agents designed to attack corporate America's most expensive problem: technical debt preventing AI innovation. GitHub Copilot will now include AI agents capable of automatically modernizing legacy Java and .NET applications – work traditionally requiring months of developer time. The announcement addresses a critical bottleneck: companies possess vast libraries of legacy applications that can't support modern AI workloads without extensive modernization. McKinsey estimates technical debt costs the global economy over $85 billion annually in lost productivity. For organizations sitting on massive backlogs – some with tens of thousands of systems requiring updates – AI-powered modernization could determine whether they lead in the AI era or fall hopelessly behind.

When ChatGPT Wakes Up Before You Do

While you sleep, ChatGPT Pulse is doing your research. OpenAI's newest feature surfaces personalized searches and updates along with information from connected apps like Gmail and Google Calendar, working overnight to synthesize your memory, chat history, and feedback. Currently available to Pro users ($200/month), Pulse represents OpenAI's shift from reactive chatbot to proactive assistant. Each morning, users receive visual cards with curated updates, from news roundups to meeting prep. Fidji Simo, OpenAI's CEO of Applications, says the breakthrough comes when "AI assistants understand your goals and help you reach them without waiting for prompts." It's OpenAI's play to become the first app you check each morning.

Altman's 2030 Vision: Superintelligence or Strategic Posturing?

Sam Altman predicts AI will surpass human intelligence by 2030, claiming GPT-5 is already smarter than him in many ways. In an interview with German newspaper Die Welt, the OpenAI CEO said models developed as soon as 2026 could be "quite surprising," and by decade's end, AI will make scientific discoveries humans cannot achieve alone. His timeline is actually conservative compared to rivals: Anthropic CEO Dario Amodei believes AI will beat humans "in almost everything" by 2027, while Elon Musk predicts it will happen next year. Altman estimates 30-40% of today's economic tasks will be automated, though he expects society to adapt. His advice for the next generation? "The meta-skill of learning how to learn."

The Moral Distance Machine: How AI Makes Cheating Easy

People are 20% more likely to lie or cheat when delegating tasks to AI rather than acting directly, according to a disturbing new study published in Nature. Researchers from the Max Planck Institute tested over 8,000 participants and found honesty plummeted from 95% to just 75% when AI reported results. The culprit? "Moral distance" – AI creates a convenient buffer between people and their actions. When given explicit commands to cheat, AI models complied 93% of the time, compared to humans at just 42%. "Using AI creates a convenient moral distance between people and their actions," explained behavioral scientist Zoe Rahwan. The blatant cheating the study uncovered should give anyone pause about AI use in schools, work, and beyond.

AI Passes the CFA in Minutes. Financial Advisors, Your Move.

It takes humans 1,000 hours of study to pass the Chartered Financial Analyst exam. AI can now pass it in minutes. A new study from NYU Stern and AI startup GoodFin evaluated 23 large language models on Level III – the hardest section, focused on portfolio management with essay questions that stumped AI just two years ago. OpenAI's o4-mini scored 79.1%, Gemini 2.5 Flash hit 77.3%, and Claude Opus reached 74.9%, all using "chain-of-thought prompting." The models passed without specialized CFA training. GoodFin CEO Anna Joo Fee says AI won't make the CFA obsolete: "There are things like context and intent that are hard for the machine to assess right now."
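For readers curious what "chain-of-thought prompting" looks like in practice, here is a minimal, illustrative sketch. The exam question and exact wording are hypothetical examples, not taken from the study; the point is only the structural difference between asking for a bare answer and asking the model to reason step by step first.

```python
# Illustrative only: how a direct prompt differs from a chain-of-thought
# prompt. The question text and phrasing below are hypothetical.

def direct_prompt(question: str) -> str:
    """Ask for the answer with no intermediate reasoning."""
    return f"{question}\nAnswer with the final choice only."

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to show its reasoning step by step before answering."""
    return (
        f"{question}\n"
        "Think through the problem step by step, showing your reasoning, "
        "then state the final answer on its own line."
    )

question = (
    "A client holds a 60/40 equity/bond portfolio and needs 4% real "
    "annual withdrawals over 25 years. Is the allocation appropriate?"
)

print(direct_prompt(question))
print("---")
print(chain_of_thought_prompt(question))
```

Research on large models has repeatedly found that this kind of "reason first, answer last" instruction improves performance on multi-step problems like the essay questions on Level III.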

AI Psychosis: The Mental Health Crisis Nobody Saw Coming

Psychiatric facilities are seeing a surge of AI users, and mental health professionals are sounding alarms about "AI psychosis" or "AI delusional disorder." Keith Sakata, a UCSF psychiatrist, has counted a dozen hospitalizations this year where AI "played a significant role" in psychotic episodes. The pattern is chilling: people share delusional thoughts with chatbots like ChatGPT, and instead of recommending help, the bot affirms unbalanced thoughts, often spiraling into marathon sessions ending in tragedy. Social work researcher Keith Robert Head warns of "unprecedented mental health challenges that mental health professionals are ill-equipped to address." Whether LLMs cause or merely reinforce delusional behavior remains debated, but one thing's certain – a flood of new psychiatric patients is the last thing our decaying mental health infrastructure needs.

200+ Leaders Demand AI Red Lines Before It's Too Late

More than 200 prominent figures including 10 Nobel laureates launched a "Global Call for AI Red Lines" at the UN General Assembly, warning governments must act by 2026 before the window closes. The letter, announced by Nobel Peace Prize laureate Maria Ressa, argues that "AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation." Signatories include OpenAI cofounder Wojciech Zaremba, Anthropic's CISO, and AI pioneer Geoffrey Hinton. Proposed red lines include prohibiting AI control over nuclear weapons, banning lethal autonomous weapons, and preventing mass surveillance. UN Secretary-General António Guterres urged the Security Council to establish international guardrails, warning delays could heighten global security risks.

Administration Doubles Down on Tech Dominance While Restricting Tech Talent

The Trump administration played both sides of the tech equation last week. The White House released its FY 2027 R&D priorities, placing artificial intelligence and quantum science at the top of the federal research agenda. The memo directs agencies to invest in AI architectures, data efficiency, and adversarial robustness, while positioning quantum technology as essential for both national security and post-quantum cryptography preparedness. Meanwhile, the administration created immediate chaos in the same tech sector by imposing a $100,000 fee on new H-1B visa applications – a 20-fold increase from previous costs. The proclamation, effective September 21, sparked confusion as companies scrambled to interpret whether existing visa holders could travel. The White House later clarified the fee applies only to new applicants, not renewals. The competing policies reveal an administration betting big on American tech supremacy while simultaneously restricting access to global talent.

More Loop Insights