US Defence Secretary Pete Hegseth threatened Anthropic with the Defence Production Act if it doesn't agree to military terms by Friday local time. Photo / Getty Images
United States Defence Secretary Pete Hegseth has threatened Anthropic, in an escalating battle over the artificial intelligence firm’s novel technology, people familiar with the ongoing discussions said.
Hegseth said officials could invoke powers that would allow the US Government to force the firm to co-operate in the name of national security if it does not agree by the weekend to terms favourable to the military.
Anthropic is prepared to walk away from the negotiations – and its US$200 million ($334m) contract with the Defence Department – if concerns over the use of its technology for autonomous weapons or mass surveillance are not addressed, according to the people familiar with the discussions.
The Pentagon has argued that it is not proposing any use of Anthropic’s technology that is not lawful.
A senior defence official said in a statement to the Washington Post that if the company does not comply by 5.01pm on Friday (11am Saturday NZT), Hegseth “will ensure the Defence Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not”.
Anthropic may abandon its $334 million contract over concerns about its AI being used for autonomous weapons or mass surveillance. Photo / Getty Images
“This has nothing to do with mass surveillance and autonomous weapons being used,” the defence official said.
Anthropic is the first firm to integrate its technology into the Pentagon’s classified networks, and the firm has aggressively positioned itself to be a key player in national security.
In a meeting with Hegseth yesterday, Dario Amodei, the company’s co-founder and chief executive, held firm that its AI model Claude should not be used to power autonomous weapons or conduct mass surveillance of Americans, said the people familiar with the discussions.
Tensions have risen between the firm and the Pentagon in recent weeks over how Anthropic’s AI was applied during the raid to capture Venezuelan President Nicolas Maduro.
Defence officials responded swiftly, suggesting that if Anthropic did not allow the Pentagon to apply its AI as it saw fit, within lawful limits, the company would be deemed a supply-chain risk, costing it, and any firm subcontracting its AI, future business opportunities.
At the meeting, Hegseth went further, saying Anthropic could in addition be subject to the Defence Production Act – which enables the Government to gain control of firms and their products – in the name of national security. The DPA was used during the Covid-19 pandemic to address medical supply shortfalls.
However, experts on the DPA questioned whether it could be used to force Anthropic to drop the limitations it seeks to maintain on how its technology can be used.
“I’m not sure that’s how that part of DPA has tended to be used, or has ever been used,” said Jerry McGinn, a director for industrial base issues at the Centre for Strategic and International Studies.
Tensions rose after Anthropic's AI was used in a raid, with the Pentagon warning of supply chain risk if compliance isn't met. Photo / Getty Images
Overall, the meeting was serious but respectful, according to one of the people familiar with the discussions, with Hegseth praising Anthropic’s technology.
The secretary said he wanted to continue to work with the company, but he threatened to cancel its contract by the end of the week, said the person, who spoke on the condition of anonymity to describe a private meeting.
Amodei argued that neither of the limits he is seeking would impinge on the department’s work, the person said.
“During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service,” Anthropic said in a statement.
“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
The meeting comes after escalating criticism of Anthropic by Pentagon officials.
Hegseth and his team have insisted in recent weeks that the military have free rein to use AI tools as it sees fit, limited only by the law rather than guardrails set by the companies that make the systems.
Defence officials say other leading companies have agreed, at least for unclassified work, casting Anthropic as a holdout.
The senior defence official said that Elon Musk’s Grok, the chatbot developed by his AI company, xAI, also has been approved to be used in a classified setting and that other AI firms are close to allowing classified use.
Anthropic and Amodei are trying to walk a fine line, positioning themselves as more than willing to work with the Pentagon and describing AI as a vital technology to allow democratic countries to defend themselves.
But shortly after Hegseth set forth his views in an internal directive, Amodei published an essay warning of the dangers of fully autonomous weapons and mass surveillance tools.
He wrote that while democratic countries could be expected to have limits on the use of such systems, “some of these safeguards are already gradually eroding in some democracies”.
The Pentagon has sped up its efforts to integrate AI into its weapons systems, driven by competition with China – which is racing to acquire AI technology for its military – and new dangers such as super-fast hypersonic missiles that are difficult for humans to react to.
The conflicts in Ukraine and Gaza have provided a preview of the role AI could play in a future war, with the widespread use of cheap semiautonomous drones and tools that analyse vast amounts of information to identify targets to strike.
The US Air Force has tested an AI-piloted jet in recent years, finding that it can beat elite pilots by cutting tiny fractions of a second off turns and manoeuvres.
Fully autonomous weapons are probably still several years away, experts say. The Defence Department’s current policy requires any system to undergo levels of review and have safeguards to ensure that humans would retain the decision-making on use of force. The policy will be reviewed as needed, officials have said.
Modern military operations are complex, involving thousands of people making life-and-death decisions quickly, said Emelia Probasco, a senior fellow at Georgetown University’s Centre for Security and Emerging Technology.
Not surprisingly, those people make mistakes, Probasco said, and AI tools could manage campaigns in all sorts of ways short of pulling the trigger.
“Everyone is still trying to think what is the best way to use these systems to improve our decisions,” said Probasco, a former Navy officer. “Nobody’s really got the definitive answer yet.”