A federal judge in the United States has temporarily halted the Pentagon’s blacklisting of Anthropic, marking the latest development in the company’s contentious battle with the military over the safety of artificial intelligence (AI) on the battlefield. In its lawsuit, filed in a California federal court, Anthropic contends that U.S. Secretary of War Pete Hegseth exceeded his authority by designating the company as a national security supply-chain risk, a designation the government uses to flag companies whose products could expose military systems to infiltration or sabotage by adversaries.
The lawsuit alleges that the government infringed on Anthropic’s right to free speech under the First Amendment by retaliating against its stance on AI safety. The company asserts that it was not afforded an opportunity to challenge the designation, violating its Fifth Amendment right to due process. U.S. District Judge Rita Lin, appointed by former President Joe Biden, concurred with Anthropic in a 43-page ruling. However, the ruling will not take immediate effect, as the administration has seven days to appeal.
The dispute arose after Anthropic opposed the military’s use of its AI chatbot Claude for U.S. surveillance or autonomous weaponry, prompting Hegseth to take the unprecedented step of blocking Anthropic from certain military contracts. Anthropic executives anticipate significant financial losses and damage to the company’s reputation as a result of the ban.
Anthropic argues that AI models are not yet reliable enough for use in autonomous weapons, and it opposes domestic surveillance as a violation of civil rights. The Pentagon, for its part, asserts that private companies should not impede military operations, while maintaining that it has no interest in using the technology in unauthorized ways.
Judge Lin’s ruling on Thursday suggested that the government’s actions were aimed at penalizing Anthropic rather than safeguarding national security interests, stating that “punishing Anthropic for criticizing the government’s contracting position is classic illegal First Amendment retaliation.” Anthropic’s spokesperson, Danielle Cohen, expressed satisfaction with the decision and emphasized the company’s commitment to collaborating with the government for the benefit of all Americans through safe and dependable AI technologies.
The designation of a U.S. company as a supply-chain risk under a government-procurement statute is unprecedented. Anthropic’s lawsuit challenges both the legality and the factual basis of the decision, noting that it conflicts with the military’s previous positive assessments of Claude. The Justice Department countered by pointing to the operational risks posed by Anthropic’s refusal to comply with contractual terms, emphasizing the importance of ensuring that military systems remain functional during operations.
Apart from the ongoing lawsuit in California, Anthropic is also embroiled in a separate legal battle in Washington over another Pentagon supply-chain risk designation that could result in its exclusion from civilian government contracts.
