Why Meta’s AI Power Grab Should Worry You

Written by Matt Cyr | Jun 30, 2025

Mark Zuckerberg has spent the last 20 years subverting Spider-Man’s most famous line. “With great power comes great irresponsibility” might as well be the motto of the man who founded Facebook in a Harvard dorm room.

He has consistently shown that protecting your data is not his priority – despite many carefully worded statements to the contrary. He’s faced intense Congressional scrutiny, paid record-breaking fines, and been the subject of some of the most consequential privacy and ethics scandals of the digital age.

And now, after the metaverse misfire that cost billions and left Meta lagging in the AI race, Zuck is trying to spend his way back into relevance.

He’s pouring billions into other AI companies – including a reported multibillion-dollar stake in Scale AI – and trying to poach top talent from OpenAI, all while promoting a culture that feels increasingly anachronistic.

I’ve never been a Zuckerberg fan. I’ve largely avoided Facebook, rarely use WhatsApp, and only occasionally scroll through Instagram. Why? Because Zuck seems like the kind of guy who would hug you with one hand while quietly picking your pocket with the other.

Is this really the guy we want in control of artificial super-intelligence? 

The AI Land Grab

Meta is in the middle of a massive AI power play. After falling behind OpenAI, Microsoft, and Google, Zuckerberg is trying to force his way back to the front of the race. His strategy: out-hire, out-build, and out-shout the competition.

The company’s AI recruiting spree has reportedly led to rifts inside OpenAI and sent ripples through the AI research community. Meta is also walking a fine line on transparency – positioning models like Llama 2 as open source while attaching license terms that restrict how the largest commercial players can use them.

In short, it’s the illusion of openness. And it fits a familiar pattern: make public commitments to the greater good, then quietly pull back when control becomes more valuable than community.

A Pattern of Irresponsibility

If this behavior seems familiar, it’s because it is. Meta has a well-documented history of ethical shortcuts and reactive apologies. Just a few highlights:

  • Cambridge Analytica (2018): Data from 87 million users was harvested without consent for political profiling.
  • $5 Billion FTC Fine (2019): For repeated privacy violations and misleading data practices.
  • Mass Data Leaks (2018–2021): Including a breach that exposed the personal data of 533 million users – without any user notification.
  • Psychological Manipulation (2012): Facebook conducted a large-scale experiment on nearly 700,000 users to manipulate their emotions via the News Feed – without their knowledge or consent.
  • Enabling Disinformation: The platform helped spread content that fueled violence in Myanmar and hate speech in India.

Zuckerberg has apologized. Repeatedly. But little has changed. The pattern is clear: breach trust, offer contrition, repeat.

Culture Drives Code

There’s another layer to worry about: leadership culture. Earlier this year, Zuckerberg told Joe Rogan he wanted more “masculine energy” at Meta and criticized what he described as “culturally neutered” companies. The comments coincided with Meta gutting its DEI programs and prompted staff backlash and legal complaints – including one from a former employee alleging a “toxic pattern of silencing women.”

This isn’t just about tone at the top. It’s about who gets a seat at the table when we’re building the systems that could soon underlie decision-making in health, education, hiring, and more.

The biases and blind spots of leadership don’t just affect workplace culture. They get encoded into the algorithms themselves.

Why It Matters Now

Meta’s AI ambitions aren’t abstract. They could shape how we write, learn, buy, vote, and govern. And given the company’s history, that should give every marketing leader, policymaker, and citizen pause.

This isn’t a question of capability. Meta has immense technical talent and deep resources. It’s a question of trust – and whether the organization that helped break the internet should now be trusted to rebuild it with thinking machines.

We are at a turning point. AI is quickly becoming the operating system for modern life. And we should be asking – loudly and often – who’s writing the code, who’s setting the rules, and who benefits most from the outcomes.

Because if we get this wrong, it won’t just be another tech scandal.

It’ll be the foundation of our future.