In legal filings, Anthropic said the administration had overstepped its legal authority and violated the company’s First Amendment rights to speak about the limits of AI’s military applications. The company filed two lawsuits, each challenging a different law the government is invoking to declare it a security risk.
“Anthropic was founded based on the belief that AI technologies should be developed and used in a way that maximises positive outcomes for humanity, and its primary animating principle is that the most capable artificial-intelligence systems should also be the safest and the most responsible,” the company’s attorneys wrote in a complaint filed in US District Court for the Northern District of California.
“Anthropic brings this suit because the federal government has retaliated against it for expressing that principle.”
Mark Jia, a law professor at Georgetown University, said Anthropic had a strong chance of success in court because the law the Defence Department is relying on was designed to target companies linked to foreign adversaries.
“It is absurd for the government to argue that Anthropic is the kind of company meant to be addressed by this statute, when the War Department [Department of Defence] has repeatedly sought to obtain Anthropic’s services for national defence,” Jia said in an email, referring to the department by the Trump administration’s preferred name.
The battle has reverberated through Silicon Valley, raising questions about what limits AI developers should be able to impose on their technology when they do business with the government. Administration officials and the Defence Department have demanded the freedom to use AI systems for any lawful purpose, arguing that the government must have the final say.
After Anthropic CEO Dario Amodei refused to agree, Trump said last month that he was ordering federal agencies to stop using Claude. Defence Secretary Pete Hegseth went further, saying he was imposing a far-reaching ban on the company doing any work with military contractors.
But behind the scenes, the two sides continued to talk last week. Technology and defence industry figures lobbied both camps to de-escalate, warning of the ripple effects of branding a leading American company a security risk in an industry where AI labs, tech giants and hardware makers are intertwined with one another and with the Pentagon.
The discussions finally came to an end on Friday, according to a defence official - a day after the tech news site The Information published a caustic internal staff memo in which Amodei said the administration was opposed to the company “because we haven’t given dictator-style praise to Trump”. The leak of the memo contributed to the ultimate breakdown of the talks, according to the defence official and a second person familiar with the discussions.
“It blew up negotiations,” said the second person, speaking on the condition of anonymity to describe private talks. Amodei apologized for the memo in a statement.
Anthropic said the government’s actions had immediate consequences for the company, alleging that they placed hundreds of millions of dollars in jeopardy. Some of Anthropic’s partners that are also federal contractors have questioned whether they can continue to do business with the company, according to the complaint.
The case has also rippled through the industry in other ways. On Tuesday, more than three dozen engineers, scientists and researchers at the rival AI companies Google and OpenAI filed a legal brief in support of Anthropic.
The employees, who are building Google’s Gemini and OpenAI’s ChatGPT software and who signed the brief in their personal capacities, argued that the Pentagon’s decision created unpredictability across the industry, undermined American competitiveness and stifled legitimate and important debates about AI’s deployment. The signatories included Jeff Dean, Google’s chief scientist and a long-time leader of the company’s work on AI. (The Washington Post has a content partnership with OpenAI.)
“By silencing one lab, the government reduces the industry’s potential to innovate solutions,” the brief said.
The employees noted that they represented a spectrum of political viewpoints but were united in the belief that there must be guardrails on the use of AI to create autonomous weapons and conduct mass surveillance.
“The mere existence of such a capability in government hands - even if never activated against a specific individual - changes the character of public life in a democracy,” they said. The brief was filed by the AI for Democracy Action Lab at the Protect Democracy Project, a non-profit that aims to combat the rise of autocracy in American institutions.
For now, the military is continuing to rely on Claude to help carry out the assault on Iran. The AI tool is embedded in the military’s Maven Smart System, which helps commanders analyse intelligence and identify targets to strike. In the lead-up to the military campaign, the system suggested hundreds of targets, with precise coordinates, and ranked them in order of importance, people familiar with the system previously told The Washington Post. It also speeds up planning dramatically and helps evaluate the aftermath of strikes, one of the people said.
Defence officials have said they are aware of their dependence on the system, and Trump said he was allowing a six-month phase-out of Anthropic’s tools.
In the long term, competitors are positioned to supplant Anthropic, even if the company is victorious in court. As officials were labelling the company a pariah, its chief rival, OpenAI, was finalising an agreement to work on the Pentagon’s secret networks. OpenAI said it had been able to secure protections related to surveillance and autonomous weapons, while agreeing to the “all lawful uses” standard that officials wanted.
Aaron Schaffer contributed to this report.