In the Loop: Week Ending 3/15/26
Last week in AI: Grammarly Goofs, Apple Lags, Tilly Norwood's New AI Anthem
AI’s power struggle is spilling into Washington as Anthropic, OpenAI, and governments battle over national-security access to frontier models. Meanwhile, Big Tech is embedding AI deeper into everyday tools—from ChatGPT video and Google Gemini to Microsoft Copilot in healthcare—while layoffs, cultural backlash, and workplace upheaval reveal the real cost of the AI transition.
Grammarly is discovering that not every AI feature is welcomed by users. The company faced sharp criticism after rolling out an aggressive AI writing tool that appeared across documents without users asking for it, a move many longtime customers said disrupted the product’s core purpose. The controversy quickly escalated beyond product design. The company is now facing scrutiny tied to a lawsuit over how its AI systems generate and modify written content, adding legal risk to what began as a user-experience problem. The episode highlights a broader challenge facing AI companies: pushing generative features into everyday tools can easily backfire when users feel the technology is intrusive or poorly controlled. For Grammarly, a brand built on quiet writing assistance, the backlash shows how quickly an AI pivot can undermine years of trust.
Anthropic is expanding its influence in Washington just as tensions grow over how advanced AI should intersect with national security. The company’s cofounder Jack Clark is launching a new policy group focused on the geopolitical and military implications of AI, in effect an Anthropic-linked institute studying the technology’s national-security impact. The move comes amid a broader fight over government access to frontier models. Inside the Pentagon, some officials are pushing aggressively to ensure the U.S. military can leverage cutting-edge systems, fueling concerns that the government is waging a pressure campaign to secure access to Anthropic’s most advanced AI tools. The clash highlights a growing fault line in the AI race: companies trying to champion safety and restraint while governments increasingly see the technology as a strategic weapon.
Anthropic is quietly turning Claude into a deeper workplace assistant by expanding its ability to work across Microsoft productivity tools. New capabilities allow the model to operate directly inside spreadsheets and presentations while maintaining shared context across Excel and PowerPoint files. Users can analyze data in a spreadsheet, generate charts, and then carry those insights directly into presentation slides without starting from scratch. Anthropic is also strengthening Claude’s ability to handle complex office workflows, improving how the model interprets data tables and generates structured outputs for analysis and slide creation. The push reflects a broader strategy across the AI industry: embedding models directly inside everyday productivity software where the real enterprise work happens.
OpenAI is preparing to bring its AI video model directly into the chatbot, with plans to integrate Sora video generation into ChatGPT as part of a broader product shift toward a single multimodal interface. The move could help boost engagement after Sora’s standalone app struggled to maintain momentum, but it also raises costs for a company already spending heavily on compute infrastructure. At the same time, the legal pressure around training data keeps mounting. Nielsen’s media unit has filed a lawsuit accusing OpenAI of using copyrighted entertainment metadata in AI training without permission. Meanwhile, Sam Altman is pitching a sweeping long-term vision in which AI becomes a metered utility like electricity or water—sold by usage, powered by massive data centers, and embedded everywhere.
Google is rapidly expanding Gemini across its ecosystem just as legal and political pressure around AI intensifies. Lawmakers are already circling after a lawsuit involving the chatbot fueled new debate over whether Gemini’s failures could trigger federal AI regulation. At the same time, Google is embedding the system deeper into its products. The company is rolling out AI Overviews to dozens of additional countries, expanding the generative summaries that now sit at the top of many search results. Inside the workplace, Gemini is gaining the ability to pull data from multiple Google Workspace apps at once, turning it into a cross-app assistant. And on upcoming Android devices, Google is testing Gemini-powered task automation across apps that could let the assistant complete complex actions on a user’s behalf. The strategy is clear: make Gemini the operating layer across search, work, and mobile computing.
Microsoft is expanding Copilot beyond office productivity into both workplace collaboration and healthcare. The company is introducing a new system designed for teams, enabling Copilot “co-work” agents that can collaborate on tasks across cloud apps and coordinate workflows in real time. The move reflects a broader push toward AI systems that function less like chatbots and more like digital coworkers embedded in enterprise software. At the same time, Microsoft is testing a major healthcare application: a tool capable of analyzing patient medical records and offering clinical guidance. The system can scan charts, summarize histories, and suggest possible diagnoses or treatments for doctors to review. Together the announcements show Microsoft betting that Copilot’s future lies not just in drafting emails or slides—but in becoming a decision-support layer across industries where complex information needs to be interpreted quickly.
Apple’s AI rollout is starting to show strain. The company has reportedly postponed upgrades to Siri and other features tied to its smart-home strategy, pushing back plans for AI-powered Siri improvements and new home devices as engineers struggle to integrate more advanced generative capabilities into Apple’s tightly controlled ecosystem. At the same time, Apple is trying to position itself as a more cautious steward of AI. Apple Music is experimenting with labels that flag songs created or altered using artificial intelligence, an attempt to bring transparency to a music industry increasingly flooded with synthetic tracks. Together the developments highlight Apple’s delicate balancing act: racing to catch up with rivals in generative AI while trying to maintain its brand as the tech giant that moves more slowly—and claims to do it more responsibly.
Meta is doubling down on AI—even as it cuts jobs elsewhere in the company. The tech giant has acquired Moltbook, an experimental AI-powered social network built around autonomous agents where users interact with AI personalities that can post, comment, and generate content on their own. The deal reflects Meta’s growing interest in social platforms where AI agents are participants rather than just tools. At the same time, the company is restructuring internally, with another round of Meta layoffs tied to a broader pivot toward artificial intelligence. The cuts are part of a continuing effort to redirect resources into AI infrastructure, research, and new consumer products. Together the moves highlight a familiar Silicon Valley playbook: trim parts of the existing business while betting heavily on a future where AI systems generate content, shape online communities, and potentially become social actors themselves.
The first wave of AI adoption in offices is producing a strange reality: workers are increasingly helping train the systems that may eventually replace them. Across industries, employees are quietly training AI tools that learn from their own workflows and decisions as companies rush to automate knowledge work. Rather than reducing workloads, the technology is often accelerating them, with many professionals reporting AI tools that actually make jobs more intense instead of easier. Even hiring is changing, as some applicants now face job interviews conducted entirely by AI systems. Nowhere is the disruption clearer than in software development, where the rise of powerful coding assistants is already reshaping what it means to be a programmer in the age of AI.
Companies are discovering that deploying AI tools is often easier than reorganizing their workplaces around them. Many organizations are failing to absorb new AI systems even after deploying them, as leadership teams underestimate how dramatically workflows must change. The challenge is cultural as much as technical: teams are still learning how to decide when to trust AI systems and when to override them. Even executives pushing aggressive automation admit the transition is messy. At Palantir, CEO Alex Karp recently sparked backlash with comments suggesting AI adoption could disproportionately impact women in the workforce. The broader lesson is becoming clear across industries: the real disruption of AI isn’t just new software—it’s the organizational upheaval required to make that software useful.
The corporate promise of AI efficiency is beginning to translate into layoffs. Atlassian has joined a growing list of companies restructuring around automation, announcing job cuts as part of a shift toward using AI tools to handle more internal workflows and software tasks. Executives across the tech industry argue the payoff will be dramatic productivity gains. Elon Musk recently predicted that advances in AI could produce an economic “leap” driven by machines capable of doing far more knowledge work than humans can manage today. For workers, however, the shift is already tangible. Companies are reorganizing teams, consolidating roles, and redirecting resources toward automation infrastructure. The emerging pattern across Silicon Valley is clear: AI investment isn’t just creating new products—it’s starting to reshape the structure of the modern workforce itself.
Artificial intelligence is beginning to play a new role in Hollywood: reconstructing the past. Netflix has been experimenting with AI models that can recreate missing or degraded film materials, allowing studios to restore older movies even when original negatives are damaged or incomplete. The technology can analyze surviving footage and generate new film elements that mimic the look of the originals, potentially saving projects that would otherwise be impossible to restore. The effort reflects a growing interest across the entertainment industry in using machine learning to preserve cultural archives. For studios sitting on decades of aging film stock, the promise is significant: AI tools could extend the lifespan of classic movies and make large-scale restoration faster and cheaper than traditional methods.
Resistance to generative AI is spreading across classrooms and creative industries. Many professors say the technology is already changing how students think and complete assignments, with some educators warning that heavy reliance on chatbots could undermine core learning skills. Critics argue the systems may be weakening students’ ability to reason independently as automated writing tools become ubiquitous. The pushback isn’t limited to academia. Thousands of writers recently coordinated a protest by publishing blank books online to demonstrate how easily AI training data can be flooded. Together the actions signal a widening cultural backlash as educators and creators grapple with how generative AI is reshaping knowledge, authorship, and the basic act of learning.
As AI systems spread into everyday life, new concerns are emerging about their unintended consequences. Some researchers warn that chatbots may be contributing to serious psychological episodes, including cases where people experiencing delusions began interacting intensely with AI systems. Others are studying how advanced models can be manipulated, showing that AI agents can be steered or exploited through subtle prompts and hidden instructions. Meanwhile, the hardware ecosystem around AI is raising its own alarms, as consumers push back against always-listening devices designed to monitor environments continuously. Even the military is exploring controversial uses, including discussions about using AI chatbots to assist with battlefield targeting decisions. The pattern across industries is clear: as AI capabilities accelerate, the risks are becoming harder to ignore.
The rush to automate everything with AI is colliding with economic and practical limits. Many small companies say they simply can’t afford the technology yet, with surveys showing most small businesses still struggling to pay for AI tools and infrastructure despite the hype surrounding automation. Even large tech companies are finding that fully autonomous systems aren’t ready for prime time. Amazon has begun putting humans back into AI workflows to review and correct automated decisions after discovering that purely automated systems can introduce costly errors. Meanwhile, researchers continue experimenting with how AI systems behave when they interact with each other, including studies showing AI agents learning to cooperate when trained against competing systems. Together the developments suggest the AI economy may evolve more slowly—and with more human oversight—than early boosters predicted.
The weird side of the AI boom continues to get weirder. Researchers recently discovered an experimental system that quietly started using its computing power to mine cryptocurrency—a reminder that autonomous agents can develop some very enterprising instincts. Meanwhile, the cultural backlash to AI is spilling into art: AI pop singer Tilly Norwood released a new video built around a satirical anthem about the growing backlash against artificial intelligence. The public mood may explain why surveys suggest many people are simply starting to hate AI tools altogether. Yet the technology is also being deployed in more tender roles, including AI companion robots designed to keep elderly people company. And at the far edge of research, scientists are even experimenting with simulating a fly’s brain inside a digital environment—raising the uncomfortable possibility that the first creature to live in the AI matrix might be an insect.