A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies, or Congress and the broader democratic process?
The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff.
Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully autonomous military targeting. Hegseth has objected to what he has described as “ideological constraints” embedded in commercial AI systems, arguing that determining lawful military use should be the government’s responsibility — not the vendor’s. As he put it in a speech at Elon Musk’s SpaceX last month, “We will not employ AI models that won’t allow you to fight wars.”
Stripped of rhetoric, this dispute resembles something relatively straightforward: a procurement disagreement.
Procurement policies
In a market economy, the U.S. military decides what products and services it wants to buy. Companies decide what they are willing to sell and under what conditions. Neither side is inherently right or wrong for taking a position. If a product does not meet operational needs, the government can purchase from another vendor. If a company believes certain uses of its technology are unsafe, premature or inconsistent with its values or risk tolerance, it can decline to provide them. For example, a coalition of companies has signed an open letter pledging not to weaponize general-purpose robots. That basic symmetry is a feature of the free market.
Where the situation becomes more complicated — and more troubling — is in the decision to designate Anthropic a “supply chain risk.” That tool exists to address genuine national security vulnerabilities, such as reliance on technology controlled by foreign adversaries. It is not intended to blacklist an American company for rejecting the government’s preferred contractual terms.
Using this authority in that manner marks a significant shift — from a procurement disagreement to the use of coercive leverage. Hegseth has declared that “effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.” This action will almost certainly face legal challenges, but it raises the stakes well beyond the loss of a single DOD contract.
AI governance
It is also important to distinguish between the two substantive issues Anthropic has reportedly raised.
The first, opposition to domestic surveillance of U.S….