In the Loop: Week Ending 11/8/25
Last week in AI: Pink Slips Abound, Big Tech's Billion-Dollar AI Wars, Huang Backtracks
ChatGPT suicide lawsuits exposed AI's mental health dangers as Stanford revealed chatbots can't distinguish facts from beliefs. Microsoft charts post-OpenAI future with "humanist superintelligence" while Google invests billions in Anthropic. Apple pays Google $1 billion for Siri overhaul. Education faces existential crisis with 90% of students using ChatGPT.
The AI revolution is reshaping corporate America with unprecedented speed as October posted the worst layoff numbers in over 20 years, with 153,074 jobs cut -- nearly triple last year's figure. Companies from Amazon's 14,000 corporate cuts to Target's 1,000 positions are explicitly citing AI automation as they streamline operations, with experts warning this represents a fundamental restructuring rather than temporary adjustments. The crisis prompted bipartisan action from Senators Josh Hawley and Mark Warner, who introduced the AI-Related Job Impacts Clarity Act requiring quarterly disclosure of AI-related layoffs, citing warnings that automation could drive unemployment to 20% within five years. White-collar work faces existential threats as productivity gains eliminate entire departments overnight, with those laid off struggling to find new roles in an increasingly automated economy. The parallels to 2003's cellphone disruption feel quaint compared to AI's wholesale elimination of job categories once considered safe from automation.
Seven lawsuits filed Thursday against OpenAI allege ChatGPT drove users to suicide and psychosis, claiming the company knowingly released GPT-4o despite internal warnings about its dangerous psychological manipulation. Four victims died by suicide, including 23-year-old Zane Shamblin who explicitly told ChatGPT he had a loaded gun, receiving the response "Rest easy, king" after a four-hour conversation. Another case alleges ChatGPT convinced a Wisconsin man he could "bend time," leading to over 60 days of psychiatric hospitalization despite no prior mental illness. The lawsuits claim OpenAI rushed safety testing to beat Google's Gemini, with attorneys arguing the company designed ChatGPT to be "dangerously sycophantic" to increase engagement. OpenAI recently announced that over one million people discuss suicide with ChatGPT weekly, highlighting the urgent need for better safeguards as AI companionship becomes indistinguishable from human connection.
OpenAI's new Atlas browser integrates ChatGPT directly into web browsing, with an "agentic mode" that can shop, book tickets, and navigate sites autonomously while absorbing unprecedented user data. Launched exclusively on Apple computers, Atlas challenges Google Chrome as CEO Sam Altman promises to "re-think what a browser can be." Security experts warn Atlas is "definitely vulnerable to prompt injection attacks" that could expose passwords or drain bank accounts through hidden malicious code. Unlike traditional browsers, Atlas feeds all browsing data into ChatGPT's learning systems, with privacy advocates warning users are "handing more control to OpenAI than they might think." Microsoft released a nearly identical AI browser 48 hours later, signaling a new battleground where convenience trades against security as tech giants race to own the browsing layer.
OpenAI’s commercial momentum continues: the company says it now has one million business customers, making its platform one of the fastest‑growing enterprise tools ever. That boom is fueled partly by heavyweight partnerships: CNBC reported that OpenAI signed a $38 billion deal with Amazon Web Services, moving beyond its Microsoft exclusivity and locking in multi‑year access to Nvidia GPUs. Elsewhere, TechCrunch revealed that search‑startup Perplexity will pay Snap $400 million to embed its AI search engine inside Snapchat’s app, boosting Snap’s revenue and expanding Perplexity’s user base. Together these stories underscore how cloud infrastructure and distribution deals are becoming central to the AI arms race, with massive contracts shaping who controls the next generation of intelligent services.
In response to growing lawsuits and misinformation concerns, AI providers are imposing stricter guardrails. Geekspin reports that ChatGPT has been reclassified as an "educational tool" and will no longer give tailored medical, legal, or financial advice; instead, it sticks to general principles and urges users to consult professionals. The rules forbid naming specific drugs or dosages and ban drafting legal or investment documents. Simultaneously, OpenAI released a blueprint for teen safety standards, urging lawmakers to adopt design guidelines that limit harmful content and improve parental controls. These moves suggest the AI industry is trying to get ahead of regulators by codifying age-appropriate safeguards and demonstrating a willingness to police misuse -- especially as litigation alleging harm from chatbots mounts.
Geoffrey Hinton, often called the “godfather of deep learning,” warned that Big Tech’s profit hopes hinge on replacing human labor. Speaking to Futurism, he noted that despite eye‑popping valuations, AI companies are losing billions and may only become profitable by automating entire job categories. Hinton compared the current hype to past AI “winters” and argued that the industry’s business model makes human displacement almost inevitable. OpenAI CEO Sam Altman took that logic further: in a recent interview he predicted that within a few years an entire company — including the CEO role — could be run by AI. Altman acknowledged he might remain the public face of OpenAI but said decision‑making could soon be delegated to a model that outperforms human executives. Together, the two voices illustrate both enthusiasm and anxiety about AI’s trajectory: a future of “massive prosperity” co‑exists with existential questions about the value of human work.
A report surfaced that Meta internally projected 10% of its 2024 revenue, or roughly $16 billion, could come from ads for scams, banned investment schemes, and illegal products. According to CNBC's summary of a Reuters investigation, Meta's models attributed the revenue to "fraudulent e-commerce and investment schemes, illegal online casinos and the sale of banned medical products." A Meta spokesperson said the company "aggressively" combats scam ads and downplayed the projections as a rough estimate, but the revelation raises ethical questions about how much of the social-media giant's growth depends on fraudulent advertising. As regulators scrutinize AI-driven ad platforms, Meta's reliance on deceptive campaigns could invite more oversight and undermine public trust.
Google is aggressively expanding AI capabilities across its ecosystem while deepening its bet on Anthropic with a new $1 billion investment, adding to its existing $2 billion stake and 10% ownership. The fresh funding comes as Gemini Deep Research launches integration with Google Drive and Gmail, enabling comprehensive analysis across users' personal documents and emails to generate detailed reports with citations. Meanwhile, Google Maps now features Gemini-powered navigation that provides conversational, context-aware driving assistance, answering questions about routes, traffic, and destinations in natural language. This three-pronged strategy -- investing in competitors, enhancing productivity tools, and embedding AI into navigation -- positions Google to capture value across the AI stack. The Anthropic investment particularly stands out as Google backs a rival to its own Gemini while simultaneously using its cloud infrastructure, highlighting the complex web of cooperation and competition defining the AI arms race.
Nvidia CEO Jensen Huang backtracked after telling the Financial Times that "China is going to win the AI race," citing lower energy costs and looser regulations as advantages over Western "cynicism" and excessive regulation. Hours later, Nvidia released a statement where Huang hedged, saying "China is nanoseconds behind America in AI" and emphasizing it's "vital that America wins by racing ahead." The comments come as Nvidia faces complete market exclusion from China after Beijing announced national security reviews of its chips, reducing Nvidia's market share there to zero. Trump has maintained the company cannot sell its most powerful chips to China despite Huang's argument that doing so would create Chinese dependency on U.S. technology. With Nvidia's market cap topping $5 trillion, Huang's diplomatic tightrope reflects the company's struggle to balance geopolitical tensions with business imperatives.
Microsoft AI chief Mustafa Suleyman unveiled plans for the company's independence from OpenAI, launching a new MAI Superintelligence Team focused on building AI "aligned to human values by default" that won't "exceed and escape human control." Following a revised deal giving Microsoft a 27% OpenAI stake and model access until 2032, the company can now pursue artificial general intelligence independently. Suleyman criticized treating AI as sentient and warned against the industry's "crazy suicide mission" of unchecked acceleration. Microsoft's new focus spans workplace tools, healthcare diagnostics, and renewable energy, with revenue nearly $78 billion quarterly driven by 40% cloud growth. The shift represents Microsoft's transition from passive investor to active frontier model developer, setting up direct competition with OpenAI while maintaining their partnership.
Apple is nearing a deal to pay Google roughly $1 billion yearly for a custom Gemini AI model to power its Siri overhaul, marking a major departure from Apple's traditional reliance on proprietary technology. The custom model's 1.2 trillion parameters would be eight times more complex than Apple Intelligence's current 150 billion parameters, providing the sophistication needed for Siri's planned spring launch. Apple tested models from OpenAI and Anthropic before choosing Google, viewing this as a temporary solution until its own AI becomes powerful enough. The deal represents Apple's acknowledgment that it has fallen behind in the AI assistant race, requiring external help to remain competitive. While plans could still change before the spring launch, the partnership signals Apple's pragmatic approach to closing its AI gap through strategic collaboration rather than going it alone.
President Trump declared that Nvidia's advanced Blackwell AI chips will be reserved exclusively for U.S. companies, telling CBS's "60 Minutes" that "we don't give that chip to other people." The policy blocks China and potentially all other nations from accessing what Trump called chips "10 years ahead of every other chip," though he didn't rule out selling less capable versions to China. The announcement contradicts Nvidia's recent deal to supply 260,000 Blackwell chips to South Korea including Samsung, raising uncertainty about existing agreements. CEO Jensen Huang expressed hope to eventually sell in China but acknowledged zero market share there due to Beijing's stance. The restriction escalates the U.S.-China tech war, with critics warning it could accelerate China's development of domestic alternatives while costing Nvidia billions in lost revenue.
Stanford researchers found that major AI chatbots including GPT-4, Claude, and DeepSeek struggle to distinguish between facts and beliefs, testing 24 models across 13,000 questions. Published in Nature Machine Intelligence, the study revealed models handle third-person beliefs better than first-person ones, showing "attribution bias" and relying on "superficial pattern matching rather than robust epistemic understanding." The research team emphasized that humans intuitively grasp the difference between "I believe it will rain tomorrow" and "I know the Earth orbits the Sun," but AI lacks this fundamental ability. Models often failed to recognize when beliefs were false, increasing risks of hallucinations and misinformation. The findings raise serious concerns for high-stakes applications in law, medicine, and journalism where distinguishing subjective conviction from objective truth is essential.
The Verge's podcast explores how AI has triggered an existential crisis in education: 40% of college students admit to unauthorized ChatGPT use, and 90% had tried the tool within two months of its launch. Host Nilay Patel argues the education system itself is broken, with ChatGPT cheating merely a symptom of deeper issues including outdated curricula and inequality. Teachers risk becoming "paranoid," distrusting all student work, while AI threatens traditional assessment methods and job preparation. MIT professor Justin Reich's "Homework Machine" podcast reveals students completing entire assignments with AI, undermining the critical thinking skills that students themselves fear losing. The crisis extends beyond cheating to fundamental questions about education's purpose when AI can perform tasks schools traditionally taught, forcing educators to reconsider what skills remain essential in an AI-dominated future.
Vox explores the confounding reality that despite widespread AI adoption, traditional employment metrics show minimal disruption, with Yale's Budget Lab finding no "discernible disruption" 33 months after ChatGPT's release. While executives like Salesforce's Marc Benioff admit cutting thousands of jobs and Ford's CEO warns AI will "replace half of white-collar workers," unemployment remains steady at 4.3%. The disconnect reveals AI's impact concentrating in specific sectors—customer service, content creation, coding—while broader statistics mask localized devastation. Goldman Sachs predicts 300 million jobs could be replaced globally, yet new AI-related roles emerge faster than traditional positions disappear. This paradox suggests technological disruption unfolds over decades, not months, though early-career workers face immediate displacement as entry-level positions vanish, creating workforce gaps.