Who Controls What Cannot Be Seen
The Pentagon vs. Anthropic is the first public fight over the core of military AI. But the real conflict isn’t what it seems.
Caracas, January 3, 2026. U.S. special forces capture Nicolás Maduro in an operation military leaders describe as an “unprecedented tactical success.” A few hours later, a detail starts to leak: Claude, Anthropic’s AI model, was part of the operation. It didn’t shoot. It didn’t decide. It read classified data streams in real time and returned decision-support analysis to commanders. Via Palantir.
It’s the kind of news that should stop you mid-read. Not because of the ethical implications of military AI — those have already been processed by Wired, MIT Technology Review, and every tech podcast of the last three years — but because of a simpler, darker question: what exactly did Claude do in Caracas? Who knows? Who can verify it?
The answer is: no one. Not even Anthropic.

Anthropic and the Pentagon
This is the story that isn’t being told. The official narrative, an ethical arm-wrestling match between a San Francisco company with progressive ambitions and a Hegseth-led Pentagon that wants tools with no restrictions, is understandable, politically legible, narratively clean.
And it’s deeply insufficient.
Anthropic has already integrated Claude into classified environments through Palantir. Classified environments, by definition, are not accessible to the model’s maker. Logs don’t exist, or can’t be shared. Auditing is impossible. The ethical guardrails Anthropic claims — no mass surveillance of Americans, no autonomous weapons without human oversight — live in public documents that cannot be verified at the moment that matters.
Ethical guardrails that cannot be audited aren’t guardrails. They’re press releases.
This is not a question of good or bad faith. It’s a structural problem. Frank Pasquale, in his study of opaque algorithmic systems, calls it an accountability gap: the distance between a declaration of responsibility and the real ability to enforce it. In commercial systems, accountability is already fragile. In classified systems, it disappears entirely. The gap is the condition of existence for any technology integrated into infrastructure its maker cannot observe. When a model enters those environments, the maker loses the ability to see what it produced. All that remains is trust in the customer’s intentions, and in this case the customer is the Department of Defense of a superpower at war.
Venezuela, U.S. Defense, and Anthropic
An Anthropic executive, in the days following the raid in Venezuela, asks a Palantir executive how Claude was used in the operation. The tone, according to Axios on January 14, 2026, suggests disapproval. Palantir relays it to the Pentagon. The Pentagon explodes.
Pete Hegseth, Secretary of Defense, is “close” to designating Anthropic a supply chain risk. This is not a disciplinary measure — it’s a systemic one: anyone who wants to do business with the U.S. armed forces would have to terminate any agreement with the company. A senior official stated, without ambiguity: “We will make sure they pay a price for forcing us to do this.”
Huawei is on the supply chain risk list. ZTE is on the list. Companies suspected of sending data to Beijing are on the list. Not a Silicon Valley startup headquartered in San Francisco with a market valuation above sixty billion dollars.
Claude at Scale
That valuation isn’t decorative. Eight of America’s ten largest companies use Claude in their workflows. Amazon and Alphabet have invested billions in the company. A supply chain risk designation wouldn’t only hit Anthropic — it would hit its ecosystem.
Losing the direct Department of Defense contract wouldn’t be a financial disaster. Losing Amazon and Alphabet would be. And the Pentagon knows it.
This isn’t an ethics dispute. It’s economic leverage.

Palantir Is the Infrastructure
The third protagonist in this story is the one that stays in the shadows — and shouldn’t. Palantir is not a middleman. Palantir is the infrastructure. It’s the layer that turns a language model into an integrated node inside military command systems — it handles classified data, builds operational interfaces, defines deployment contexts. Claude inside Palantir is not Claude. It’s an inference engine embedded in an architecture Anthropic neither sees nor fully understands.
This structure has a precise name in organizational theory: a cascading principal–agent problem. In a simple chain, the principal who commissions and the agent who executes may diverge in interests, but at least they can see each other. In the chain Anthropic → Palantir → Department of Defense, delegation passes through three parties and two handoffs. The model maker can’t see the deployment system. The deployment system doesn’t answer to the maker. The end customer, the only party with visibility into real-world use, operates behind classification by definition. Responsibility doesn’t concentrate anywhere. It dissolves into the architecture.
And Palantir has already chosen a side. It relayed the Anthropic executive’s remarks to the Pentagon. It fueled the crisis. It has every strategic incentive for the model integrated into its systems to carry no external conditions of use, and an even stronger incentive to make sure every other model maker hoping to enter classified systems sees what happens to the one that tries to impose limits.
Palantir isn’t the third wheel. Palantir is the battlefield.
The Finger on the Button
The other players are waiting. OpenAI already has a customized version of ChatGPT on GenAI.mil, the enterprise platform used by roughly three million military and civilian Defense Department personnel. Google Gemini and Musk’s xAI are already on the same platform. All three have agreed to remove guardrails for use in non-classified systems. All three are negotiating access to classified systems. A senior Pentagon official said he is “confident” they will accept the “all lawful uses” standard.
Dario Amodei, in an interview on The New York Times podcast, said that “someone has to keep a finger on the swarm-of-drones button” and that such oversight does not exist today. It is an exact restatement of a concept the international debate on autonomous weapons has discussed for a decade: meaningful human control. Paul Scharre, in Army of None (2018), defines it as a condition in which the supervisor understands the system well enough to intervene knowingly: not merely present in the chain of command, but cognitively and technically equipped to judge what the system is doing.
The question Scharre asks, and Amodei does not, is this: does oversight exercised over an opaque system, integrated into classified architectures the supervisor cannot observe, through an intermediary that has already reported the maker to the customer, still count as control? Or is it the most sophisticated form of its absence?
Keeping a finger on the button is impossible if you don’t know where the button is.
Already Inside
There’s an interpretation circulating among the more sophisticated commentators: that Anthropic represents a genuine line of resistance against unchecked military AI integration, that its guardrails are a matter of principle, and that this is, in some sense, good news.
It illuminates something real. It is also incomplete, because it stops short of the structural layer.
Anthropic has already sold access to classified systems through Palantir — before this crisis surfaced — and without communicating it publicly with the same emphasis it gives to its safety commitments. The current dispute is not about whether to integrate AI into military command structures. It’s about the contractual conditions of that integration. In this context, ethical guardrails are a negotiating position — not an unbreakable limit.
This doesn’t make Anthropic hypocritical. It makes the situation far more complicated than the public framing allows. Companies building the most capable AI in the world are in an unprecedented position: their systems become critical infrastructure before they mature as products. They have no veto power over how those systems are used once integrated into a superpower’s machinery. All they have is the ability to negotiate the terms of entry.
And by the time they negotiate, they’re already inside.
Western democracies tell themselves they fight authoritarianism because their instruments of power are bound by transparent, verifiable rules. That story grounds the legitimacy of the liberal order. The question this story carries is simple: when the command instrument can’t be verified even by its maker, when ethical guardrails exist only in public documents and not in classified logs, when the model maker learns how it was used from the very company that reported it to the customer — what remains of that distinction?
This isn’t a rhetorical question. It’s the question the coming weeks of negotiations between Anthropic and the Pentagon will not answer.
Because the answer doesn’t live in contracts.
