Last Week in AI: AI Actors, Slack is Listening, Salesforce is Trusting
Hollywood embraced AI actors and directors while SAG-AFTRA protested. Meta monetizes chatbot conversations and fights regulation as California passes landmark AI safety laws. OpenAI expands into HR, finance, and video while Apple secretly builds Siri's replacement. Enterprise AI stumbles on trust issues, Slack surveils workplace chats, and autonomous agents now control your credit card.
Hollywood's AI Reckoning: A Digital Actress and an Algorithm Director
The entertainment industry faced twin AI controversies this week when SAG-AFTRA condemned the use of "Tilly Norwood," a fully synthetic AI actress created without human performance capture, in an upcoming film. The union warned this represents an existential threat to actors' livelihoods, arguing that AI-generated performances undermine the craft and legal protections performers have fought to establish. Meanwhile, producer Andrea Iervolino announced his film The Sweet Idleness was entirely directed by an AI system he developed, with the algorithm handling everything from shot selection to editing decisions. Iervolino framed the project as artistic experimentation, but critics question whether removing human creative direction crosses an ethical line. Together, these developments signal Hollywood's growing comfort with AI replacing not just below-the-line technical roles, but core creative positions—a shift that may fundamentally reshape what filmmaking means and who gets to participate in it.
The AI Job Displacement Debate: Hype Versus Reality
The question of AI's workforce impact produced contradictory answers this week. A Yale study found the US jobs market hasn't yet been seriously disrupted by AI, with researchers seeing minimal employment effects despite widespread ChatGPT adoption. The findings suggest fears of immediate mass unemployment may be overblown. Yet CBS News reported growing concerns about AI's impact on trade jobs and white-collar work, highlighting workers who feel the threat looming even if statistics don't yet reflect it. Most strikingly, OpenAI's own research revealed ChatGPT can already replace significant portions of work tasks across numerous professions, from customer service to content creation. The disconnect between current employment data and demonstrated AI capabilities suggests we may be in a deceptive calm before displacement accelerates—or that AI's workplace integration will prove more complementary than substitutional.
Meta's Regulatory Gambit Meets California's Safety Push
As states ramp up AI oversight, Meta launched a super PAC to combat regulation it deems "overly restrictive," pouring millions into lobbying efforts and political campaigns. The company argues that heavy-handed rules will stifle innovation and hand competitive advantage to China, positioning itself as defender of American AI leadership. Yet just days later, California Governor Gavin Newsom signed SB 53, landmark legislation requiring frontier AI developers to publicly disclose their safety protocols and report critical safety incidents. The bill represents the nation's most comprehensive AI accountability framework, mandating transparency around model capabilities and potential harms. Meta's aggressive opposition collides directly with California's regulatory momentum, setting up a precedent-defining battle between Big Tech's self-regulation philosophy and government-imposed guardrails. The outcome will likely shape how AI development is governed nationwide.
OpenAI's Triple Expansion: HR, Hollywood, and Wall Street
OpenAI is aggressively expanding beyond chatbots into new territory. The company is developing HR tools that could challenge LinkedIn's dominance, aiming to revolutionize recruiting, talent management, and professional networking through AI-powered matching and analysis. Meanwhile, OpenAI announced Sora 2, its next-generation video and audio creation app, promising to democratize film production with unprecedented realism and creative control—though concerns about deepfakes and copyright loom large. Perhaps most audaciously, ChatGPT now offers stock investment advice, analyzing markets and suggesting portfolio strategies for users willing to trust AI with their financial futures. This three-pronged assault on professional services, creative industries, and finance signals OpenAI's ambition to become infrastructure for knowledge work itself—raising questions about whether any white-collar domain remains beyond AI's reach.
Apple's Secret Siri Successor: Meet Veritas
Apple is developing Veritas, an advanced AI chatbot designed to replace Siri entirely, according to internal documents. The project represents Apple's acknowledgment that Siri has fallen hopelessly behind competitors like ChatGPT and Google's Gemini. Veritas will leverage large language models while maintaining Apple's privacy-first philosophy through on-device processing for sensitive queries. The system aims to handle complex multi-step tasks, understand contextual follow-ups, and integrate deeply across Apple's ecosystem—from managing calendars to controlling smart home devices. Development timelines suggest a potential 2026 launch, though Apple's cautious approach to AI could delay it further. The shift signals that even Apple, long resistant to cloud-dependent AI, recognizes voice assistants need fundamental reimagining to remain relevant in the generative AI era.
Slack's AI Listens in on Every Workplace Conversation
Slack is granting its AI unprecedented access to workplace conversations, scanning messages across channels to power automated summaries, search improvements, and intelligent recommendations. The company insists data remains encrypted and isn't used for external AI training, but privacy advocates remain unconvinced. Slack's AI analyzes tone, identifies action items, surfaces relevant documents, and suggests responses, all by processing intimate company communications. For employees, virtually nothing shared in Slack remains private from algorithmic scrutiny, even in supposedly secure channels. Organizations can't fully opt out without losing core functionality. As workplace collaboration tools become AI-powered surveillance infrastructure, the line between productivity enhancement and invasive monitoring grows dangerously thin, raising urgent questions about digital workplace rights in the AI age.
Meta to Monetize Your AI Conversations with Targeted Ads
Meta plans to sell targeted advertisements based on data extracted from users' AI chatbot conversations, marking a disturbing new frontier in surveillance capitalism. When users interact with Meta AI across Instagram, Facebook, and WhatsApp, the company will analyze conversation content to build detailed preference profiles for ad targeting. Meta argues this creates more "relevant" advertising experiences, but critics see it as weaponizing intimate AI interactions users might assume are private. Unlike public social media posts, chatbot conversations often involve personal problems, health concerns, relationship issues, and financial struggles—precisely the vulnerable moments advertisers covet. The move transforms AI assistants from helpful tools into data extraction engines. Users seeking advice from AI may soon find their deepest concerns packaged and sold to the highest bidder.
Hopper's AI Agent Books Flights and Cancels Trips Autonomously
Travel app Hopper launched an AI agent that books flights and cancels trips without requiring human approval, representing one of the first truly autonomous AI shopping assistants. Users set preferences and budgets, then the AI monitors prices, makes purchasing decisions, and handles cancellations when better options emerge, all while they sleep. The system navigates complex airline websites, processes refunds, and manages rebooking logistics that typically frustrate travelers. Hopper claims its AI saves users an average of $50 to $100 per booking through constant price monitoring and strategic timing. The technology addresses a genuine pain point in travel planning, but raises questions about liability when AI makes expensive purchasing decisions independently. The convenience of "set it and forget it" shopping collides with the risks of algorithms controlling your credit card.
Salesforce Builds AI Trust Layer as Enterprise Deployments Falter
Salesforce launched an AI Trust Layer designed to address widespread enterprise deployment failures, acknowledging that most corporate AI projects collapse due to data governance and security concerns rather than technical limitations. The platform provides guardrails for monitoring AI outputs, preventing data leaks, and ensuring compliance with industry regulations—critical infrastructure absent from most enterprise AI tools. Salesforce's move responds to evidence that companies rush AI adoption without adequate safety frameworks, leading to failures when systems hallucinate in customer-facing scenarios or expose sensitive information. The Trust Layer acts as middleware between AI models and enterprise data, filtering queries and validating responses. As one of the first major platforms prioritizing governance over features, Salesforce bets enterprises will pay premium prices for trustworthy production AI.
Stop Drifting: What's Missing From AI Transformation
A VentureBeat analysis reveals why most AI transformation efforts fail: lack of intentional design principles guiding implementation. Companies typically "drift" into AI adoption, deploying tools reactively without strategic frameworks for how AI should enhance workflows versus replace them. Successful transformations require clear principles: defining which tasks humans should retain, establishing feedback loops for continuous improvement, and designing AI as collaborative partner rather than autonomous replacement. Without explicit design philosophies, organizations default to cost-cutting automation that demoralizes workers and delivers disappointing results. Effective AI integration demands intentional choices about augmentation versus substitution, transparency versus opacity, and centralized versus distributed control. As enterprises pour billions into AI, those articulating clear design principles will separate transformation success from expensive failure.
Thinking Machines Debuts Tinker
Thinking Machines, founded by former OpenAI CTO Mira Murati, released Tinker, an API designed to give developers fine-grained control over how language models are fine-tuned and steered. The tool addresses a critical gap: customizing model behavior has typically meant choosing between black-box fine-tuning services that hide the training process and building distributed training infrastructure in-house. Tinker exposes low-level training primitives while handling the heavy distributed compute behind the scenes, letting developers experiment with their own algorithms and data, diagnose problems, and guide models toward desired behaviors. The launch signals Murati's strategic focus on AI controllability and reliability over raw capability, a philosophical departure from OpenAI's scale-at-all-costs approach. With roughly $2 billion in backing, Tinker is the first public step in Murati's vision for transparent, human-steerable AI.