Pentagon’s AI Ban Temporarily Blocked by Judge: Anthropic Wins First Battle


A federal judge has temporarily halted the Pentagon’s blacklisting of AI firm Anthropic, marking a significant win for the company in its ongoing legal battle. The preliminary injunction, granted by Judge Rita F. Lin of the Northern District of California, suspends the government’s “supply chain risk” designation while the case proceeds. The ruling comes after weeks of escalating tension between Anthropic and the Department of Defense over acceptable uses of the company’s AI.

The Core Dispute: Safety vs. Control

At the heart of the conflict is Anthropic’s refusal to allow its AI, Claude, to be used for lethal autonomous weapons or domestic mass surveillance. The Pentagon, under Secretary Pete Hegseth, pushed for contracts that included “any lawful use” language, essentially demanding unrestricted access. Anthropic resisted, leading to the punitive designation and threats that could cripple its business.

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Judge Lin wrote in her order.

The Pentagon’s decision to label Anthropic a supply chain risk – a designation usually reserved for foreign entities linked to adversaries – sparked bipartisan criticism and raised concerns about retaliation against companies that dissent from administration policy. The question before the court is not whether the military can choose its AI vendors, but whether it overstepped legal boundaries in punishing that dissent.

Financial Stakes and Contractor Confusion

Anthropic claims the designation has already caused widespread confusion among partners, with dozens seeking clarification on their ability to continue working with the company. Court filings suggest potential revenue losses ranging from hundreds of millions to billions of dollars. The government’s own statements in court have further muddied the waters.

During a hearing, Judge Lin pressed officials on whether contractors would be terminated for using Anthropic’s technology even in unrelated work, such as supplying toilet paper to the military. The Department of Defense representative struggled to provide clear answers, raising doubts about how broadly the ban was meant to apply.

Pentagon’s Contradictory Messaging

The situation was further complicated by Secretary Hegseth’s public posts on X (formerly Twitter), which initially appeared to ban all commercial activity with Anthropic. The Pentagon later downplayed the severity of the statement, claiming it wasn’t “really meant” as a blanket prohibition. Judge Lin pointedly questioned this contradictory messaging during the hearing.

What Happens Next?

A final ruling is still weeks or months away. Anthropic says it remains focused on working with the government to ensure safe AI implementation, but the lawsuit highlights a broader debate about AI ethics, national security, and corporate speech. The case could set a precedent for how the US government navigates AI procurement, particularly with companies that prioritize safety commitments over unfettered access.

The legal battle underscores a critical tension: The military’s need for technological advantage versus the potential risks of unchecked AI deployment. The outcome will likely shape the relationship between government and AI developers for years to come.