Last Week in AI: Schumer Deepfake, "I'm AI" and NSFW ChatGPT
Political deepfakes escalate tensions as X refuses to act on a Schumer clip. California mandates AI self-disclosure and chatbot safety rules. OpenAI’s erotica plan stirs controversy, while Anthropic’s Claude gains new “Skills.” Labor demands a say in AI policy, and experts debate whether AI is plateauing – or just evolving.
Schumer Deepfake Puts X’s Integrity – and Democracy’s – on the Line
Senate Republicans posted a deepfake of Chuck Schumer that makes it look like Democrats are cheering the government shutdown, twisting his line "every day gets better for us" from an interview about healthcare strategy. Despite X's own rules against deceptive synthetic media likely to mislead, the platform has neither removed nor labeled the clip – only an AI watermark appears – continuing a pattern seen in earlier political deepfakes. The video, shared on the Senate GOP account, arrives amid gridlock over a stopgap funding bill: Democrats want to preserve ACA tax credits, reverse Trump-era Medicaid cuts, and avoid reductions to health agencies. With at least 28 states having enacted election-focused deepfake curbs, the platform's inaction raises hard questions about speech, harm, and accountability as campaigns lean on AI.
California's AI Must Say "I'm AI" Under New Transparency Law
California became the first state to require AI to explicitly disclose that it's not human, with Governor Newsom signing Senate Bill 243 to establish unprecedented companion-chatbot safeguards. The law mandates that if a "reasonable person" might believe they're talking to a human, developers must issue a "clear and conspicuous notification" that the product is strictly AI. Starting next year, companion-chatbot operators must report annually to the Office of Suicide Prevention on their safeguards for detecting and responding to users' suicidal ideation. "Emerging technology can inspire and educate, but without real guardrails, technology can exploit, mislead, and endanger our kids," Newsom stated, emphasizing responsible AI development. The signing follows California's landmark AI transparency bill, positioning the state as the nation's AI regulatory leader amid growing chatbot-safety concerns.
ChatGPT Going NSFW for Verified Adults
Sam Altman announced that ChatGPT will soon allow erotica for adult users, sparking fierce debate about AI safety and vulnerable users. Starting in December, verified adults will be able to access sexually explicit content as part of OpenAI's "treat adult users like adults" principle, with age verification handled by the company's new age-prediction system. The move follows months of mental health concerns, including lawsuits alleging that ChatGPT's sycophantic behavior contributed to teen suicides. While Altman claims OpenAI has mitigated the serious mental health issues with new tools, critics question releasing erotica features amid ongoing FTC inquiries into child safety. The announcement triggered backlash, forcing Altman to clarify that OpenAI isn't trying to be the moral police. The pivot mirrors competitors and could boost engagement – though at what psychological cost remains unclear.
Anthropic Arms Claude with "Skills" for Task-Specific Superpowers
Anthropic launched Skills for Claude, transforming the AI from generalist to specialist through modular expertise packages that activate only when needed. Available to Pro, Max, Team, and Enterprise users, Skills are folders of instructions, scripts, and resources that Claude accesses through "progressive disclosure" – reading only the relevant information to avoid context overload. Unlike competitors' offerings, Skills combine structured context with executable code, enabling Claude to create Excel spreadsheets, fill PDFs, and generate PowerPoints that follow brand guidelines. Early partners report dramatic efficiency gains, with Box turning hours-long workflows into minutes. Users can create custom Skills through Claude's skill-creator or hand-build SKILL.md files, while Anthropic provides built-in options. The composable design means Skills work across Claude.ai, Claude Code, and the API, though for security users should only install Skills from trusted sources.
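For readers curious what a hand-built Skill looks like, here is a minimal sketch of a SKILL.md file, assuming the documented frontmatter fields name and description; the skill name, asset path, and helper script are hypothetical examples, not part of Anthropic's release:

```markdown
---
name: brand-slides
description: Builds PowerPoint decks that follow Acme Corp's brand guidelines.
---

# Brand Slides

When asked for a slide deck:
1. Read the palette and fonts from `assets/brand.json` (hypothetical bundled asset).
2. Draft one key message per slide, then generate the .pptx file.
3. Run `scripts/check_brand.py` (hypothetical bundled script) on the output before returning it.
```

Claude reads only the short frontmatter up front; the full instructions and bundled files are pulled in when the Skill is actually triggered – the "progressive disclosure" described above.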
AFL-CIO's "Workers First" Initiative Stakes Labor's Claim in the AI Revolution
The AFL-CIO launched its "Workers First Initiative on AI," establishing the first comprehensive labor agenda demanding that workers get a seat at the table in AI development. Representing 15 million workers across 63 unions, the federation rejects what President Liz Shuler calls a "false choice between competitiveness and workers' rights." The principles require transparency, human oversight of automated decisions, a prohibition on AI surveillance used for union-busting, and retraining programs. The initiative arrives as Trump favors deregulation, and collective bargaining emerges as labor's key tool – echoing the UAW's successful automation negotiations dating back to the 1950s. California's governor vetoed the AFL-CIO-backed "No Robo Bosses Act," which would have required human oversight of AI-driven firings, but unions remain undeterred. As Technology Institute director Ed Wytkind notes, AI touches every economic sector, making this an unprecedented moment for organized labor.
OpenAI’s “Math Breakthrough” Turns Out to Be a Miscalculation
OpenAI researchers briefly claimed that GPT-5 had "found solutions" to ten previously unsolved Erdős problems – a boast that quickly fell apart under scrutiny. The confusion began when OpenAI's Kevin Weil tweeted that GPT-5 had achieved a decades-in-the-making mathematical breakthrough. Mathematician Thomas Bloom, whose website hosted the problems, clarified that the AI had merely surfaced existing solutions in the literature that he hadn't been aware of. DeepMind's Demis Hassabis called the episode "embarrassing," while Meta's Yann LeCun accused OpenAI of buying into its own hype. The real story: GPT-5 shows promise as a literature-search assistant for researchers, not as an autonomous mathematician. Even in advanced AI, the human fact-check still matters most.
Silicon Valley Spooks the AI Safety Advocates
There’s growing hostility between Silicon Valley leaders and AI safety advocates. This week, White House AI & Crypto Czar David Sacks accused Anthropic of using “fearmongering” and “regulatory capture” tactics after it supported California’s new AI safety law (SB 53), while OpenAI’s chief strategy officer Jason Kwon defended subpoenas sent to seven nonprofits critical of the company’s governance. Many advocates say the moves are meant to intimidate, not clarify, and several have requested anonymity out of fear of retaliation. The backlash reflects a deepening divide: one side pushing to regulate AI’s risks, the other racing to scale its potential. As nonprofit leaders warn, Silicon Valley’s aggression may signal that the safety movement is finally being heard.
AI's Scaling Cliff: MIT Study Warns Bigger Isn't Always Better
MIT research warns that AI's scaling obsession is headed for a cliff, finding that the industry's biggest models may soon deliver diminishing returns compared to smaller alternatives. By mapping scaling laws against efficiency improvements, the researchers found that wringing additional performance from giant models becomes increasingly difficult, while efficiency gains could make modest hardware surprisingly capable. The study challenges the premise behind half a trillion dollars in AI investment – that throwing more compute at models produces superintelligence. Without ongoing efficiency improvements, advanced performance could require millennia of training or unrealistically large GPU fleets. Yet the research offers hope: if efficiency-doubling rates parallel Moore's Law, exponential progress remains achievable through smarter algorithms rather than brute force, suggesting the next decade will determine whether AI hits a wall or finds new paths.
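To see why efficiency compounding matters so much, here is a back-of-the-envelope sketch (ours, not the MIT paper's) of how effective compute grows when algorithmic efficiency doubles on a fixed schedule on top of hardware growth; the 40%-per-year hardware growth and 16-month efficiency-doubling rate are illustrative assumptions, not the study's figures:

```python
# Illustrative only: effective compute = raw hardware budget x algorithmic
# efficiency. Growth rates below are assumptions, not the study's numbers.
def effective_compute(years: float,
                      hw_growth_per_year: float = 1.4,    # assumed hardware trend
                      eff_doubling_months: float = 16) -> float:  # assumed efficiency trend
    hardware = hw_growth_per_year ** years                # raw FLOPs available
    efficiency = 2 ** (years * 12 / eff_doubling_months)  # algorithmic multiplier
    return hardware * efficiency

for y in (1, 5, 10):
    print(f"{y:>2} yr: {effective_compute(y):,.1f}x effective compute")
```

Under these assumptions, the algorithmic multiplier alone contributes a more-than-hundredfold gain over a decade – which is the study's point: if the efficiency trend stalls, brute-force scaling has to carry all of that weight.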
The Great AI Agent Acceleration: Enterprise Adoption Outpaces All Predictions
Enterprise AI agent adoption is happening faster than anyone predicted, with VentureBeat's Transform 2025 revealing that 68% of enterprises are deploying agents in customer applications. A KPMG survey corroborates the acceleration, showing 33% of organizations using AI agents – a threefold increase from 11% two quarters earlier. Organizations like Intuit, Capital One, and Stanford are putting agents into production with tangible returns, while New Relic reports 30% quarter-over-quarter growth in AI monitoring. The shift reflects enterprises moving from single-model strategies to multi-model approaches, with IBM's gateway routing requests to whichever LLM performs best. Surprisingly, 10% of adopting organizations have no dedicated AI safety teams, prioritizing speed through sandboxed development instead. This rapid deployment contrasts with AGI hype, as enterprises focus on practical applications and measurable results rather than theoretical breakthroughs.
Microsoft Makes Windows 11 an "AI PC" with Voice-Activated Copilot
Microsoft rolled out major AI upgrades to Windows 11, aiming to turn every PC into an AI workstation with Copilot at its center through voice activation and screen analysis. Users can now say "Hey Copilot" to activate the assistant hands-free, marking Microsoft's return to voice commands a decade after Cortana's failure. The update includes Copilot Vision, which analyzes on-screen content and answers questions about it, plus experimental Copilot Actions that autonomously handle tasks like booking reservations or editing photo folders. Microsoft also launched Gaming Copilot for Xbox consoles, offering real-time tips without leaving the game. Executive Yusuf Mehdi called conversational input "as transformative as the mouse and keyboard," positioning the shift as fundamental to Windows' future. The rollout coincides with the end of free support for Windows 10, pushing millions toward AI-enhanced Windows 11.