Dario, We’re Not in Kansas Anymore

When the Government Picks an AI Model, We’ve Entered a New Era

It’s been quite a week for Dario Amodei, the CEO of Anthropic. It started last Thursday, when he and Sam Altman from OpenAI awkwardly avoided clasping hands on stage at the AI Impact Summit in India.

The next day, news broke that tensions were growing between Anthropic and the Pentagon over Anthropic’s unwillingness to bend its safety rules in the ways the government was asking.

Then on the 24th, Anthropic announced that it was abandoning its industry-leading safety pledge, disappointing many who see the company as a bulwark against the other increasingly revenue-and-power hungry frontier model companies.

Then, yesterday, things really started to heat up when the President announced a complete ban on government use of Anthropic’s models.

And this morning we got the double whammy of Amodei defending his company’s approach, calling the government’s actions “retaliatory and punitive,” and Altman’s OpenAI announcing its own deal with the government.

I don’t know about you, but I miss the good old days of three weeks ago when Anthropic and OpenAI were fighting over Super Bowl ads.

This Wasn’t Just a Crazy Week for Anthropic. It Was a Signal.

But here’s the thing.

This isn’t about bruised egos, safety pledges, or who shook whose hand on stage.

When the U.S. government bans a specific AI model, when Pentagon officials pressure labs to loosen guardrails, when CEOs accuse administrations of retaliation — we’ve crossed a threshold.

AI models themselves are now geopolitical actors.

For the past two years, the AI conversation has largely centered on capabilities: Who has the smartest model? Who’s winning the benchmarks? Who can write better code, summarize faster, reason deeper?

That era is ending.

We’re entering the era of power.

Procurement Is Policy.

Governments aren’t just regulating AI companies. They’re choosing sides. They’re making procurement decisions that shape market winners. They’re exerting pressure on safety frameworks. They’re signaling which models are aligned — politically, strategically, ideologically — with national interests.

When a model becomes the object of state retaliation or endorsement, it stops being “just a product.” It becomes infrastructure.

Electricity is infrastructure. The internet is infrastructure. GPS is infrastructure.

Now AI is infrastructure.

And infrastructure lives inside politics.

The Competitive Era Is Over. The Strategic Era Has Begun.


This is why Dario’s week matters. Not because of a hand clasp that didn’t happen. Not because of a safety pledge walked back. But because the frontier labs are no longer operating in a competitive vacuum.

They are operating inside a geopolitical chessboard.

For businesses — and especially for CMOs and senior leaders — this changes the calculus.

Vendor selection is no longer just about performance and price. It’s about durability. Regulatory exposure. Public perception. Alignment risk. Supply chain stability in a world where AI access can be throttled by executive order.

Three weeks ago, we were joking about Super Bowl ads.

Today we’re watching AI companies maneuver like defense contractors.

We’re not in Kansas anymore.

And the organizations that understand that shift — that AI is no longer just a tool but a strategic layer intertwined with government power — will navigate the next phase far more intelligently than those still treating this as a feature comparison exercise.

The AI era didn’t just accelerate.

It matured.

And maturity, as it turns out, is political.