
Anthropic’s Claude stands, by most credible accounts, as the most capable AI model currently operating in classified military systems. The Pentagon reportedly used it in the January 2026 capture of Venezuelan President Nicolás Maduro. Eight of the ten largest American corporations rely on it. And yet the company that builds this tool now presumes to tell the Department of War when and how the military may employ it in defense of the United States.
This is an extraordinary claim. It deserves an extraordinary response.
The facts of the dispute are by now widely reported. The Pentagon asked Anthropic and three other major AI laboratories (OpenAI, Google, and xAI) to permit military use of their models for “all lawful purposes.” The other three agreed to remove their safeguards for unclassified military work, and at least one accepted the broader standard. Anthropic alone insists on maintaining two carve-outs: a prohibition on mass surveillance of American citizens and a prohibition on fully autonomous weapons systems. The company frames these restrictions as matters of conscience. The Pentagon frames them as an unworkable constraint on operational flexibility. Both sides state their positions accurately. But only the Pentagon holds the constitutional authority to make the call.
The Institutional Question
To understand why this dispute matters, and why Secretary Hegseth is right to escalate it, we must return to the foundational question of what the military is. In The Soldier and the State, Samuel P. Huntington established the principle that the Armed Forces exist as a professional institution set apart from civilian society, governed by values and imperatives necessarily distinct from those of the liberal democratic order they defend. The military serves a singular purpose: to fight and win the nation’s wars. Every other consideration (political, social, ideological) falls subordinate to that mission.
This principle has been under assault for decades. As I have argued elsewhere, the military stood among the first American institutions formally captured by the group quota regime. Successive administrations rewrote its organizing principles to serve demographic goals rather than warfighting excellence. The DEI revolution in the military succeeded because of a simple conceptual error: the belief that the Armed Forces are just another institution in civil society, bound by every social norm and political fashion of the country they exist to defend.
The Anthropic dispute represents a new variation on this old error. Where the DEI regime imposed the social priorities of the progressive left onto military personnel policy, the AI safety regime now threatens to impose the technological preferences of Silicon Valley engineers onto military operations. The mechanism differs. The underlying logic remains identical: civilians who have never served in combat and who bear no responsibility for the consequences of military failure presume to define the conditions under which America’s warfighters may do their jobs.
The Engineer’s Veto
Former OpenAI researchers founded Anthropic after departing, in part, over concerns about what they viewed as insufficient attention to AI safety. The company built its brand and its recruiting pipeline around the promise of “responsible” AI development. Its leadership includes talented scientists who think deeply about catastrophic risk scenarios. Axios reports that the company must also navigate internal disquiet among its engineers about working with the Pentagon. None of this is contemptible in itself.
But a company that voluntarily enters the defense market accepts a different set of obligations than one that sells productivity software to accountants. When Anthropic signed a contract valued at up to $200 million with the Department of War, when it became the first AI model to operate inside the military’s classified networks, it assumed a relationship with the most consequential institution in the American republic. That relationship carries a non-negotiable condition: the democratically accountable civilians who lead the Department of War decide how the tools of war are used. The engineers who build them do not.
This is not a novel principle. Lockheed Martin does not get to decide which targets an F-35 may strike. Raytheon does not impose terms of use on a Tomahawk missile after it leaves the factory. General Dynamics does not retain a veto over how a combatant commander employs an Abrams tank. The defense industrial base has always operated on the understanding that once a capability reaches the warfighter, command authority governs its employment, shaped by law, policy, and the judgment of accountable leaders, not the preferences of the manufacturer.
Anthropic’s position breaks radically from this norm. The company asserts what amounts to an ongoing right of refusal, an engineer’s veto, over the operational use of a capability already deployed inside the most sensitive military systems in the world. If this precedent stands, it will not end with Anthropic. Every technology company that sells to the Department of War will claim the power to impose its own moral framework on military operations. The result will not be a more ethical military. It will be a chaotic, unaccountable oligarchy in which the preferences of unelected engineers override the decisions of democratically accountable officials at every level of the chain of command.
The Nature of the Threat
Anthropic’s stated concerns (mass surveillance of Americans and fully autonomous weapons) are not trivial. Legitimate legal and ethical questions surround both issues. But Congress, the courts, and the elected civilian leadership of the Department of War must answer those questions. A private company cannot resolve them by embedding its preferred policy outcomes into the technical architecture of a military system.
An Anthropic official told Axios that existing surveillance law has not kept pace with AI capabilities, that current statutes do not contemplate the scale at which AI can process publicly available information. This is probably true. It is also irrelevant to the question of whether a defense contractor should make law. If a gap exists in the statutory framework, legislation must close it. A terms-of-service agreement negotiated between corporate lawyers and Pentagon procurement officials cannot serve as a substitute.
The operational realities Anthropic has created further undermine its position. Claude currently operates as the only frontier AI model inside the military’s classified networks. By insisting on restrictions that no other major lab has imposed, the company has made itself a single point of failure in the Department’s most sensitive AI capabilities. When an Anthropic employee reportedly contacted Palantir after the Maduro operation to ask whether Claude had been used, raising what Pentagon officials described as concerns about operational approval, the company demonstrated precisely why the engineer’s veto cannot stand. A technology vendor cannot retroactively audit military operations for compliance with its corporate ethics policy.
What China Understands
The strategic dimension of this dispute demands attention. In his AI Acceleration Strategy, Secretary Hegseth framed AI integration as a race and declared that the Department of War will become an “AI-first warfighting force.” At SpaceX’s Starbase in Texas, he announced that the Pentagon is “done running a peacetime science fair while our potential adversaries are running a wartime arms race.” He is right. The People’s Liberation Army does not contend with an engineer’s veto.
Beijing’s approach runs in precisely the opposite direction. The Pentagon’s own annual report to Congress on Chinese military developments warns that Beijing’s commercial and academic AI sectors narrowed the performance gap with leading U.S. models throughout 2024. Georgetown’s Center for Security and Emerging Technology documented how China’s Military-Civil Fusion strategy has turned civilian AI companies into direct suppliers of PLA capabilities, with the majority of AI-related military procurement contracts now going to private firms rather than state-owned defense enterprises. The Jamestown Foundation reports that the PLA rapidly adopted DeepSeek’s generative AI models in early 2025 and now uses them across intelligence, surveillance, and reconnaissance functions. CSET’s February 2026 analysis of thousands of PLA procurement documents confirms that China pursues AI-enabled capabilities across all domains, from decision support systems to sensor enhancement to data fusion algorithms.
Chinese AI companies do not negotiate terms-of-use restrictions with the Central Military Commission. As the Foreign Policy Research Institute observed, Military-Civil Fusion operates as an ecosystem designed from the top down: state-guided, system-wide, with universities, labs, and a steadily expanding web of dual-use vendors mobilized for defense priorities while Beijing systematically breaks down contracting barriers. The 15th Five-Year Plan currently in formulation will institutionalize this fusion as the primary mechanism for defense modernization through 2030. No Chinese AI company retains a veto over how the PLA employs its tools.
This does not mean the United States should abandon its commitment to the rule of law or to the principle that military operations must comply with domestic and international legal obligations. It means that the locus of those decisions must rest where the Constitution places it: with the elected commander-in-chief, the Senate-confirmed secretary of defense, and the uniformed officers who bear personal legal responsibility for the lawfulness of military operations. An AI company’s acceptable use policy cannot substitute for the laws of armed conflict, the Uniform Code of Military Justice, or the oversight of the United States Congress.
Every month that the Pentagon spends negotiating with a contractor over permission to use a deployed capability is a month that China does not waste. The competitive dynamic forgives nothing. If the United States military cannot use its own tools without permission from the companies that built them, we will lose the AI competition, and the wars that competition exists to prevent, not because we lacked the technology, but because we lacked the will to use it.
The Case for Sanctions
Secretary Hegseth’s reported consideration of designating Anthropic a “supply chain risk” is a severe measure. A senior Pentagon official told Axios that the Department will “make sure they pay a price for forcing our hand.” The designation would require every company doing business with the Department of War to certify that it does not use Anthropic tools in its own workflows, a massive disruption given Claude’s ubiquity in the corporate landscape. Chief Pentagon spokesman Sean Parnell stated: “Our nation requires that our partners be willing to help our warfighters win in any fight.”
It is also entirely proportionate to the threat. A technology vendor that embeds itself in classified military systems and then asserts the right to constrain how those systems operate is, by definition, a supply chain risk. The risk does not take the form of espionage or sabotage in the traditional sense. It takes the form of operational unreliability: the possibility that a critical capability will be withdrawn, degraded, or subjected to after-the-fact review based on the moral preferences of people who bear no responsibility for the consequences of military failure.
The broader defense industrial base watches closely. OpenAI, Google, and xAI are all negotiating with the Pentagon over similar terms. A senior administration official confirmed that the Pentagon is using the Anthropic confrontation to set the tone for those negotiations. This approach is correct and necessary. If Anthropic maintains its carve-outs without consequence, every other AI company will demand the same. The result will be a patchwork of corporate vetoes overlaid on military operations, with no two vendors offering the same permissions and no combatant commander able to rely on the tools at his disposal.
The Department of War should set a clear and uniform standard. Secretary Hegseth’s AI Acceleration Strategy already mandates that the undersecretary for acquisition and sustainment incorporate standard “any lawful use” language into all AI procurement contracts within 180 days. Companies that sell capabilities to the military agree to their use for all lawful purposes, as determined by the constitutionally accountable chain of command. Companies that cannot accept this standard remain free to serve the commercial market. They do not remain free to serve the commercial market and the defense market on their own terms.
A Question of Sovereignty
At its core, the Anthropic dispute poses a question of sovereignty. Who decides how the instruments of national defense are employed? The answer, under the Constitution and under two and a half centuries of American civil-military relations, is clear: the people, through their elected representatives and the officials those representatives appoint. Not the officer corps, which is why we maintain civilian control of the military. Not the defense contractors, which is why procurement authority rests with the government. And not the engineers of Anthropic, however brilliant, however well-intentioned.
The progressive left spent sixty years remaking the military in the image of its social priorities, from McNamara’s Project 100,000 to Hicks’s Strategic Management Plan. Conservatives who are serious about reclaiming the institution cannot afford to let a new class of unaccountable actors, the AI safety establishment, impose a different set of constraints on the same warfighters. The mechanism is new. The threat to military effectiveness and democratic accountability remains the same.
Secretary Hegseth should make clear, by action and not merely by rhetoric, that the Department of War will not accept an engineer’s veto over military operations. If Anthropic cannot meet the standard required of a defense partner, the Pentagon must pay the short-term cost of disentanglement to establish the long-term principle that the American military answers to the American people, and to no one else.