What Conservatives Should Learn From the Anthropic Dispute

A Conservative Case for Anthropic

Hegseth was essentially ordering Anthropic to build a new capability—one that contravenes its most deeply held ethical principles, and lies outside the scope of its current contracts—and make that available to the U.S. military. If any party is radically revising the nature of defense contracting, it is the U.S. government.

Since the simmering dispute between Anthropic and the Pentagon boiled over last month, many on the left and among the more libertarian, tech-friendly corners of the right have leapt to the AI lab’s defense. Many conservatives, however, have shrugged it off (perhaps understandably, given that less than 12 hours later we launched a war against Iran). Others have defended the Administration’s decision to cancel Anthropic’s contracts and brand it a supply chain risk, apparently convinced by Secretary of War Pete Hegseth’s argument that constraints on the use of the company’s Claude AI model for autonomous weapon systems and mass surveillance constitute an attempt to “seize veto power over the operational decisions of the United States military.”

A recent piece in The National Interest, for example, echoed the idea that Anthropic was usurping a decision that properly belonged to the military, dismissing the company’s professed concerns as “tertiary to the real debate: who has control over the weaponry American soldiers use.” And the right-leaning Compact magazine ran an article framing Anthropic’s moves as part of a broader effort by Silicon Valley to effect a “redefinition of sovereignty” that would move the locus of political power from democratically accountable governments to private tech firms.

But these kinds of arguments make a crucial error. Their authors, and the Pentagon, assume the existence of a Claude—let’s call him WarClaude—with two switches. One is labeled “fully autonomous weapon systems,” and the other “mass surveillance of Americans.” And before WarClaude leaves the figurative AI factory, Anthropic’s engineers flip those two switches off. All the government is asking, in this telling, is that Anthropic leave them in the “on” position.

If that were the situation, Anthropic could plausibly be accused of seeking “veto power” over the U.S. military, especially if there were no provisions in its contracts about killer robots or building an AI panopticon.

But the reality is quite different. Some of these constraints are indeed overlaid on Claude’s base system, but important parts are also baked into the models themselves. Which is to say: Anthropic simply does not make a WarClaude. It could, presumably, but right now the Pentagon is asking for a product that does not exist. The two use cases at issue have never been covered by Anthropic’s contracts, and the company does not provide models capable of executing them to any of its customers. Hegseth was essentially ordering Anthropic to build a new capability—one that contravenes its most deeply held ethical principles, and lies outside the scope of its current contracts—and make that available to the U.S. military. If any party is radically revising the nature of defense contracting, it is the U.S. government.

Obviously, if the government doesn’t want to buy Claude, it doesn’t have to. Canceling the contracts and barring future deals is eminently fair. But the White House is going further than that. And given what is actually being asked of Anthropic, conservatives should sympathize with the company—and object to the wielding of government power in such unprecedented and legally dubious ways.

Consider that no American firm has ever been declared a supply chain risk before (let alone one whose products are apparently being used with considerable success in ongoing military operations). If this designation is allowed to stand, it would effectively force any business that works with the War Department to end its commercial relationship with Anthropic. This would cripple one of the world’s most important frontier labs—one former White House AI advisor referred to it, accurately, as “corporate murder”—and set back American innovation, while also sending a chilling message to the rest of the industry. It would mirror the worst excesses of the Biden Administration, and open the way for future left-wing administrations to use these same powers to kill conservative companies. For Republicans who purport to care about economic freedom, reining in government power, and protecting rights of conscience, this should be a five-alarm fire.

But it’s not just devotees of small government and free markets who should be concerned. For years, conservatives of a more populist bent have criticized Big Tech for putting profits over principles. From Meta’s work with the Chinese Communist Party to Google’s attempts to strangle potential competitors in the cradle, tech firms have engaged in unscrupulous and unethical conduct in pursuit of a quick buck. But in the case of Anthropic, we have a rare and refreshing case of the inverse: a company risking its business, and perhaps its own survival, rather than acquiesce in something it considers to be profoundly wrong. If conservatives are serious about wanting corporations to consider virtue and the common good rather than only maximizing shareholder value, throttling Anthropic is precisely the wrong way to proceed.

Nor is the interest purely abstract or procedural. Anthropic’s CEO Dario Amodei is unique among his confreres in the thoughtfulness and moral seriousness with which he considers the most powerful technology since the atom bomb. At bottom, the values driving his company’s behavior are (or should be) the same values animating conservatives’ engagement with AI: ensuring that life-and-death decisions are made with human beings in the loop; protecting Americans’ civil liberties and constitutional rights; and ensuring that it is man, in the final instance, who shapes and controls his technology—not the other way around.

It remains unclear how the dispute will play out. Talks between Anthropic and the War Department reportedly restarted this week, although the government later formally provided notice of the supply chain risk designation. Anthropic has said it will challenge the decision in court, and appears to have a very good chance of success. And similar issues have clouded attempts by other AI companies to steal a march on their rival by inking their own military deals.

Whatever happens, the sheer potency of AI, together with the astonishing speed with which it is being developed and deployed, will continue to raise profound and difficult questions. And conservatives are far from united in their answers. But at the very least, we should agree that companies should not be coerced into building AI tools they believe are dangerous and unethical.

This article was submitted as a pseudonymous rebuttal to a piece authored by William Thibeau titled "The Engineers' Veto."
