In the Loop: Week Ending 6/28/25

Last Week in AI: Apple, Jobs & Environment 

Last week, the scale of AI progress was matched only by the scale of questions it raised. Apple and Meta made bold moves toward AI-first futures, while Salesforce touted efficiency gains with a human cost. From classroom shortcuts to carbon emissions and deepfake regulation, the message was clear: as AI gets smarter and faster, our job is to stay thoughtful, transparent, and human in the loop.


Apple May Be Eyeing Perplexity AI

In a move that could reshape its AI strategy, Apple is reportedly exploring a potential acquisition of Perplexity AI, an emerging player in conversational search. With Perplexity’s answer-focused interface and real-time sourcing, this integration could give Siri a serious upgrade – and position Apple to compete with Microsoft and Google in the race to own the AI-native search experience. While no deal is confirmed, the interest alone signals Apple’s deeper commitment to embedding useful, intuitive AI in its ecosystem – especially at a time when user experience and trust are paramount.


Zuckerberg Wants Meta to Go All-in on AI

Apple isn't the only company potentially interested in buying Perplexity. Apparently Meta is too – part of an aggressive push by Mark Zuckerberg to remake Meta into an AI-first company, according to a deep dive from the New York Times. From Instagram content creation to WhatsApp assistants to the metaverse, generative AI is being stitched into Meta’s core platforms. The goal? Drive engagement, reduce costs, and future-proof the business. But the shift isn’t just technical – it’s cultural. Internally, teams are being reorganized around AI. Externally, Meta is racing to redefine itself as a leader in applied AI. The metaverse may be on hold, but the intelligence layer is very much in play.


Anthropic Launches “Economic Futures Program” to Study AI’s Job Impact

Anthropic introduced its Economic Futures Program – a new initiative funding research into AI-driven labor-market changes and proposing policies to prepare for disruption. With models forecasting that AI could displace up to half of entry-level white-collar jobs within five years, the program funds grants, builds data infrastructure, and seeks public dialogue on balancing AI’s promise with its socioeconomic risk. By funding this type of self-reflection, Anthropic is stepping beyond headlines to steward real-world impacts – modeling a more proactive approach to responsible AI deployment.


When AI Makes Research Too Easy

According to a new LendingTree report covered by MSN, students using large language models for research may be doing less actual learning. LLMs offer speed and convenience – but also create a kind of intellectual fast food, giving answers without requiring inquiry. Educators warn this could lead to surface-level understanding and a decline in critical thinking skills. As generative AI becomes ubiquitous in classrooms and knowledge work, we’re reminded: the goal isn’t to think less. It’s to think better, with AI as a tool for exploration – not a shortcut to answers.


ChatGPT Learns to Be Mean – Sort Of

If you've read about my experiences with MattGPT (Farm bros vs tech bros and "You don't have to unstick me"), then you know that AI can sometimes seem to have a mind of its own. So what happens when you ask ChatGPT to be rude? Gizmodo ran the experiment, prompting it to offer mean comments on users’ photos. The responses were snarky – but carefully measured. No hate speech. No cruelty. Just enough bite to show boundaries are flexible, but present. The exercise raises deeper questions about AI personality and moderation. As more people treat chatbots like companions, how much emotion – positive or negative – should be baked in? And who decides what counts as “too far”? In the age of synthetic speech, tone matters.


The Carbon Shadow of Smart Machines

As AI models grow more capable, they also grow more power-hungry. A new report covered by Futurism finds that the energy needed to run LLMs at scale can rival that of cross-country flights. The emissions, largely tied to data center power and cooling, raise urgent questions about AI’s environmental cost. For companies touting digital transformation, this is a wake-up call: responsible AI isn’t just about ethics or alignment – it’s about energy. Optimization, model efficiency, and clean infrastructure aren’t “nice-to-haves” – they’re table stakes for sustainable innovation.


Google Open-Sources a CLI-native AI Agent

In a move sure to excite developers, Google unveiled Gemini CLI, an open-source AI agent built for command-line workflows. Designed to automate everything from scripting tasks to debugging code, the tool reflects a broader shift: AI is moving from flashy demos to practical, embedded workflows. Open-source access means developers can adapt the agent to their own needs – something proprietary assistants often can’t offer. It's a reminder that the future of AI isn’t just chatbots; it's quiet, useful automation woven into the tools we already trust.


Salesforce Leans Hard into AI – Cuts Jobs

Marc Benioff says AI is now doing “30 to 50 percent” of the work at Salesforce, a claim made during a CNBC interview that coincided with 1,000 layoffs. It’s a stark illustration of the tension between productivity and employment in the age of generative AI. While automation is framed as a boost for efficiency, the human toll is becoming harder to ignore. For companies championing AI transformation, the message is clear: do it transparently, do it ethically, and don’t conflate cost-cutting with innovation.


Denmark Cracks Down on Deepfakes

In a major move for synthetic media regulation, Denmark passed a law requiring AI-generated images and videos to be labeled and strengthening copyright protections for likenesses. The legislation also empowers creators to challenge unauthorized deepfakes of their faces or voices. As the line between real and fake blurs, Denmark’s approach offers a roadmap for balancing creative freedom with individual rights. For marketers, media companies, and platforms, the signal is clear: deepfake governance isn’t just coming – it’s already here.


Congress Considers 10-year Moratorium on State AI Laws

A proposal in Congress could block state and local AI regulations for a decade – including California’s training-data disclosure rules and Tennessee’s ELVIS Act – by using a federal “moratorium” clause within a GOP mega-budget bill. Backed by influential figures like Sam Altman, this move intends to prevent a regulatory patchwork amid the U.S.–China AI race. But it’s drawing sharp criticism from Democrats, consumer advocates, and labor groups who argue it curtails state power to protect citizens from AI harms. This debate spotlights the tension between national innovation and localized safeguards.



More Loop Insights