Employees of OpenAI and Google DeepMind have stepped in to support Anthropic after the U.S. Department of Defense labeled the AI company a supply chain risk. More than 30 researchers and engineers from the two labs filed a legal brief backing Anthropic’s lawsuit, arguing that the government used its authority in a way that threatens the broader artificial intelligence industry and discourages responsible debate about how AI systems should be used.
The dispute began after Anthropic refused to allow the Pentagon to use its AI models for large-scale surveillance of Americans or in systems capable of firing weapons autonomously. Soon after that decision, the Defense Department applied the supply chain risk label to the company, a classification usually reserved for foreign security threats. Anthropic responded by filing suit against the Defense Department and other federal agencies.
Wired reported that employees from OpenAI and Google DeepMind signed an amicus brief supporting Anthropic’s legal fight. The filing argues that the Pentagon had other remedies available if it disagreed with Anthropic’s restrictions on how its technology could be used.
“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” the brief states, signed by researchers including Google DeepMind chief scientist Jeff Dean.
Concerns about AI restrictions and government pressure
The filing stresses that AI developers often place strict technical and contractual limits on their systems to prevent misuse. In the absence of clear public laws governing artificial intelligence, these internal safeguards remain among the main protections against harmful applications.
The employees also warned that punishing companies for setting ethical limits on their technology risks damaging the United States’ leadership in artificial intelligence research. Several signatories had previously published open letters urging the Defense Department to withdraw the supply chain risk label and calling on technology leaders to support Anthropic’s stance on responsible AI use.