A federal judge has temporarily stopped the Pentagon and the Trump administration from blacklisting Anthropic, giving the AI company a key legal win as tensions rise over how its technology should be used in defense operations. The ruling blocks a directive that labeled Anthropic as a “supply chain risk” and ordered federal agencies to cut ties with its Claude models.
The Wall Street Journal reported details of the ruling, highlighting how the court pushed back against the government’s actions and questioned the legal basis behind them.
Judge Calls Out Government Action
Judge Rita F. Lin of the Northern District of California ruled in favor of Anthropic, stating that the Defense Department’s move appeared retaliatory and violated free speech protections. She also ordered the administration to reverse its designation and halt the phase-out of Anthropic’s tools across federal agencies.
“It looks like an attempt to cripple Anthropic. The government’s approach reflects classic illegal First Amendment retaliation, with no statutory basis for branding a U.S. company as a threat for expressing disagreement,” Judge Lin said.
Lin’s use of “illegal First Amendment retaliation” signals how seriously the court views the case, especially since the “supply chain risk” label usually applies to foreign threats, not domestic companies.
Dispute Over Military AI Use
The conflict began after Anthropic walked away from a $200 million Pentagon contract over its insistence on strict limits on how its AI models could be used. The company refused to allow deployment in fully autonomous weapons or large-scale surveillance systems, which triggered a strong response from defense officials.
Secretary of Defense Pete Hegseth labeled Anthropic a risk to national security, while President Trump escalated the situation by calling the company “radical left” and ordering agencies to stop using its tools. Soon after, Anthropic filed a lawsuit, arguing that the government acted in a punitive and retaliatory manner.
The injunction allows Anthropic to continue operating while the legal process unfolds, though the administration has seven days to appeal. The broader case will now decide whether the government can impose such restrictions on private AI companies over policy disagreements.
At the same time, competitors have moved quickly, with OpenAI securing a new Pentagon contract shortly after Anthropic stepped back. Still, Anthropic remains involved in defense-linked systems through partners, which keeps it relevant even as the legal battle continues.