Anthropic has pushed back against the Pentagon’s claim that the company poses a national security risk, filing sworn declarations in a California federal court as part of its ongoing lawsuit against the Department of Defense. The company argues that the government relied on technical misunderstandings and raised concerns in court that never came up during months of prior negotiations.
The case stems from a late February decision by President Trump and Defense Secretary Pete Hegseth to cut ties with Anthropic after the company refused to allow unrestricted military use of its AI systems. The dispute now heads toward a hearing before Judge Rita Lin in San Francisco on March 24.
According to Anthropic’s court filings, including declarations from Head of Policy Sarah Heck and Head of Public Sector Thiyagu Ramasamy, the timeline presented by the Pentagon does not match what happened during discussions between the two sides.
Anthropic challenges Pentagon claims
Heck called the government’s claim that Anthropic sought approval authority over military operations false. She wrote, “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role.”
She also stated that concerns about Anthropic interfering with military operations appeared for the first time in court filings, not during negotiations, which left the company without a chance to respond earlier.
Email raises questions about alignment
Heck pointed to a March 4 email from Pentagon official Emil Michael to CEO Dario Amodei, sent a day after the government finalized its supply chain risk designation. In that email, Michael said the two sides were “very close” on key issues related to autonomous weapons and surveillance.
That message contrasts with public statements made shortly afterward, in which officials said there were no active talks and no chance of renewed negotiations.
Technical and security arguments
Ramasamy addressed claims that Anthropic could interfere with military systems. He explained that once the company’s AI is deployed in secure government environments, Anthropic cannot access or modify it, and any updates require Pentagon approval.
He also rejected concerns about foreign hires, noting that employees working on classified systems undergo U.S. government security clearance checks.
Anthropic argues that the designation amounts to retaliation for its stance on AI safety, while the government maintains it made a national security decision based on business considerations.