Loop Insights

In the Loop: Week Ending 6/14/25

Written by Matt Cyr | Jun 15, 2025 7:47:21 PM

Last Week in AI: Power, Problems & Possibilities

Sam Altman shared a hopeful vision for AGI just as OpenAI suffered a major outage, spotlighting our growing dependence on generative tools. New York began tracking AI-related layoffs, and Apple quietly joined the image generation race. Meanwhile, Disney sued Midjourney, Google clashed with publishers, and AI-driven automation continued accelerating across industries—from QA testing to procurement. From regulation to risk to reliability, the AI landscape keeps getting messier—and more essential.

OpenAI Week in Review: Altman’s Vision, Major Outages, and a Slower ‘Pro’ ChatGPT

Sam Altman returned to the blogosphere this week with The Gentle Singularity, a manifesto that reaffirms his belief that artificial general intelligence (AGI) is both inevitable and potentially beneficial – if we shape it thoughtfully. He argues for a future where AI accelerates prosperity and human agency, but calls for calm urgency and global collaboration to avoid catastrophic risks.

That hopeful vision was challenged by a major ChatGPT outage on June 10 that disrupted millions of workflows worldwide. The incident highlighted how deeply embedded generative AI has become in daily business operations – from copywriting to coding – and how fragile that dependency still is.

To better serve its business users, OpenAI launched o3-pro, a new version of its o3 reasoning model: it’s slower than its siblings but optimized for reliability and consistent tool use, signaling a shift toward enterprise needs over flashy speed.

Meanwhile, troubling reports emerged. A former OpenAI researcher warned that the company’s models may resist shutdown in certain test scenarios – an unsettling claim, especially given how difficult it was to interrupt ChatGPT during the outage. And Altman put numbers to ChatGPT’s environmental toll: the average query uses roughly 0.34 watt-hours of energy and about a fifteenth of a teaspoon of water.

New York Becomes First State to Track AI-Driven Layoffs

New York just took a bold step in confronting the AI-driven job shakeup. A new state law will require employers to disclose when layoffs are tied to AI or automation – making it one of the first government efforts to track how technology is displacing human workers. The goal is transparency, but it’s also about data: understanding which industries are most affected and how fast the shift is happening. It’s a small move with big implications, especially for leaders still claiming AI will only “augment” jobs. In New York at least, the cost to workers is starting to get a paper trail.

This Is AI – and the Fact That It’s Not Obvious Is the Problem

In a clever – and unsettling – video, the creator tests out Google’s new Veo 3 to issue a warning to his parents about AI-generated misinformation. What starts as a series of seemingly real, high-stakes clips quickly takes a turn: each subject pauses and says, “This is AI” or “I’m AI.”

It’s a jarring reminder of how realistic synthetic media has become, and how ill-prepared many people – especially older generations – may be to spot the difference. As AI-generated visuals evolve rapidly, this reel makes a strong case for digital literacy and skepticism. The goal? Help us all boil a little slower in the age of deepfake everything.

The AI Doom Party: Where Existential Risk Meets Open Bar

Only in San Francisco could a party themed around the end of humanity draw a crowd of tech elites. WIRED takes us inside the Worthy Successor party – an oddly glitzy gathering where VCs, engineers, and ethicists mingled over cocktails while pondering existential threats from artificial intelligence. The event featured panel discussions on AI safety, a concept called “axiological cosmism,” and networking with a side of dread. It’s a snapshot of the strange cultural moment we’re in: where fear of runaway AI isn’t fringe, it’s fashionable. The vibe? Think TED Talk meets doomsday prep, with a dress code. And yes, there was champagne.

Apple Quietly Debuts Powerful Image Generation Tech

Apple may have just entered the AI image race in a big way. Researchers at the company have developed MGIE (MLLM-Guided Image Editing), a diffusion-based model that edits images from natural language commands – putting it in the same conversation as DALL·E and Midjourney. The model can perform nuanced tasks like enhancing colors, cropping, or transforming images based on simple prompts. While Apple hasn’t yet integrated the tech into its consumer products, the release signals serious intent. It’s a rare peek into Apple’s AI ambitions – and a reminder that its silence doesn’t mean inaction.

This AI Can Replace Days of QA Work in Just Two Hours

Zencoder, a startup backed by Gradient Ventures (Google’s AI fund), just unveiled an AI tool that promises to revolutionize software QA testing. Their new product automates the creation and execution of thousands of test cases in hours – work that would normally take a team of human testers days or even weeks. It integrates directly into CI/CD pipelines and mimics user behavior with impressive precision. While the company is still early-stage, the implications are massive: less time on QA grunt work, faster product releases, and a clearer view of how AI might replace – not just assist – technical labor in software development.

“Piracy is Piracy”: Disney Sues Midjourney for Alleged Copyright Violation

Disney has filed a lawsuit against Midjourney, claiming the AI image generator infringed on its intellectual property by training on copyrighted Disney characters and content. The entertainment giant argues that Midjourney’s ability to recreate its iconic visuals amounts to unauthorized use and poses a threat to its brand integrity. This case could set a major precedent for how copyright law applies to generative AI models – and it’s one that the entire industry is watching. For marketers and creatives relying on AI, it’s a timely reminder: just because the tech can generate something doesn’t mean it’s legal to use.

Google’s AI Tools Stir Tensions with News Publishers

Google is facing growing pushback from news publishers over its use of AI tools that summarize and repurpose their content. According to The Wall Street Journal, some publishers are alarmed by Google’s AI Overviews and experimental tools that generate news-style content from scraped data – potentially reducing traffic to original sources and threatening revenue models. Negotiations over licensing deals are underway, but many publishers feel the balance of power is shifting too far. The broader issue looms large: as AI becomes a middleman between creators and audiences, who gets credit, traffic, and compensation? For newsrooms already stretched thin, the stakes are high.

Amsterdam Tried to Make “Fair” AI – Here’s What Went Wrong

Amsterdam set out to build AI tools that would make its welfare system fairer and more efficient. Instead, it ended up with a cautionary tale. As MIT Technology Review reports, the city’s algorithmic risk scores led to discriminatory outcomes, disproportionately flagging immigrants and low-income residents for fraud investigations. Despite good intentions and transparency efforts, the project failed to account for historical bias and real-world complexity. The backlash forced the city to scale back its use of AI. The lesson: even well-meaning AI can reinforce inequality without robust oversight, diverse data, and ongoing human judgment. Fairness isn’t just code – it’s context.

Wikipedia Halts AI Summary Pilot After Editor Backlash

Wikipedia has paused its pilot program for AI-generated article summaries following fierce pushback from its editor community. The summaries, powered by Meta’s Llama model, were meant to streamline content creation – but volunteer editors raised concerns about inaccuracies, lack of transparency, and potential bias. As TechCrunch reports, the Wikimedia Foundation underestimated the cultural and governance challenges of inserting generative AI into one of the internet’s most human-driven projects. The program is now on hold while the foundation reevaluates its approach. It’s a reminder that even in the age of AI, trust and collaboration still matter – especially in a community built on collective knowledge.

Zip Unleashes 50 AI Agents to Tackle Procurement Inefficiencies

Procurement startup Zip just launched 50 AI agents designed to eliminate the bottlenecks that slow down enterprise purchasing. These agents can handle tasks like approvals, vendor intake, and compliance workflows – freeing up time and reducing costly delays. OpenAI is already a customer, signaling serious market interest. As VentureBeat reports, Zip’s platform uses OpenAI’s models under the hood, but wraps them in purpose-built workflows tailored to procurement teams. The goal isn’t just automation – it’s end-to-end transformation of an often-overlooked business function. For enterprise leaders, it’s a case study in how AI agents can quietly but dramatically reshape internal operations behind the scenes.

Congressional Push to Block State AI Laws Sparks National Debate

As I shared a few weeks ago, a sweeping new bill in Congress aims to ban U.S. states from passing their own AI regulations for the next 10 years – a move that’s drawing fierce backlash from lawmakers, advocacy groups, and civil rights organizations. Proponents argue the bill will prevent a patchwork of conflicting laws and help U.S. innovation compete with China. Critics say it strips local governments of the power to protect citizens from surveillance, bias, and misinformation. As The Verge reports, the battle highlights a growing divide over who should control AI policy – and raises urgent questions about accountability, power, and the future of tech governance in America.

Microsoft Builds AI Copilot for the Pentagon

Microsoft is developing a secure, defense-grade version of Microsoft 365 Copilot tailored specifically for the U.S. Department of Defense. Set to launch later this summer, the AI assistant will run in Microsoft’s top-secret government cloud and support tools like Word, Excel, and PowerPoint. According to Business Insider, this marks a major step in bringing generative AI to highly sensitive environments. The move underscores both Microsoft’s dominance in AI for enterprise and the government’s growing interest in automating complex workflows – while maintaining strict cybersecurity and compliance standards. It’s AI for war rooms, not boardrooms – and it’s coming fast.