Anthropic digs in heels in dispute with Pentagon, source says - Reuters
All Coverage
Defense Secretary Pete Hegseth has given Anthropic an ultimatum to allow the U.S. military unrestricted use of its AI technology, including its Claude chatbot, or risk forfeiting its government contract. Anthropic CEO Dario Amodei, citing ethical concerns, opposes the use of AI for fully autonomous weapons and domestic surveillance. The Pentagon, however, argues that lawful military operations require adaptable AI tools without built-in limitations.
U.S. Defense Secretary Pete Hegseth has given Anthropic CEO Dario Amodei until Friday to provide the military full access to its AI model, Claude, or face severe penalties. In a tense meeting, Hegseth warned that the Pentagon could either sever ties and label Anthropic a 'supply chain risk' or use the Defense Production Act to compel cooperation. The Pentagon views Anthropic's safeguards around Claude as a growing obstacle but remains concerned about losing access to the highly advanced model, which is currently integral to classified defense systems.
Defense Secretary Pete Hegseth has scheduled a critical meeting with Anthropic CEO Dario Amodei amid escalating tensions over the use of the AI model Claude in classified military systems. Claude is currently the only AI tool deeply integrated into U.S. defense and intelligence operations, valued for its advanced capabilities. However, Anthropic is resisting Pentagon pressure to lift safeguards entirely, particularly in two areas: mass surveillance of Americans and developing autonomous weaponry without human oversight.
Elon Musk's AI company, xAI, has secured a deal with the Pentagon to deploy its Grok model in classified military systems, marking a significant development in defense technology. Until now, Anthropic's Claude was the only AI model used in such high-security applications, including intelligence analysis, weapons development, and battlefield coordination. However, tensions between the Pentagon and Anthropic over security safeguards have prompted the Department of Defense to explore alternatives.
Anthropic has until Friday evening to either give the U.S. military unrestricted access to its AI model or face the consequences, reports Axios. Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei in a meeting Tuesday morning that the Pentagon will either declare Anthropic a 'supply chain risk' — a designation usually reserved for foreign adversaries — or invoke the Defense Production Act (DPA) to force the company to tailor a version of the model to the military’s needs. The DPA gives the president the authority to force companies to prioritize or expand production for national defense. It was recently invoked during the COVID-19 pandemic to compel companies like General Motors and 3M to produce ventilators and masks, respectively. Anthropic has long stated that it doesn’t want its technology used for mass surveillance of Americans or for fully autonomous weapons — and is refusing to compromise on these points.
Defense Secretary Pete Hegseth has threatened that the government could invoke powers to force the artificial intelligence firm to share its novel technology in the name of national security if Anthropic does not agree by Friday to terms favorable to the military, people familiar with the ongoing discussions said. But Anthropic is prepared to walk away from negotiations, and from its $200 million contract with the Defense Department, if concerns over the use of its technology for autonomous weapons or mass surveillance are not addressed, according to the people familiar with the discussions. Anthropic is the first firm to integrate its technology into the Pentagon's classified networks, and it has aggressively positioned itself to be a key player in national security. In a meeting with Hegseth on Tuesday, Dario Amodei, the company's co-founder and chief executive, held firm that its AI model Claude should not be used to power autonomous weapons or conduct mass surveillance of Americans, the people familiar with the discussions said.
In a statement to The Post, Anthropic said it is "committed to using frontier AI in support of U.S. national security." "Claude is used for a wide variety of intelligence-related use cases across the government, including the [Defense Department], in line with our Usage Policy," Anthropic said. "We are having productive conversations, in good faith, with [the Defense Department] on how to continue that work and get these complex issues right."

Until recent weeks, Anthropic had been in an enviable position, with a $200 million contract and its technology uniquely approved for use within the Pentagon's classified networks. That quickly began to change, Trump administration officials say, following Anthropic's response to the Pentagon's recent use of its technology in the Maduro operation.

Anthropic's Claude and technology developed by defense firm Palantir were used in preparation for the Jan. 3 raid, according to a person familiar with the assault who spoke on the condition of anonymity to share confidential details about the operation. During the raid, scores of Maduro's security guards and Venezuelan service members were killed.

After the attack, a senior defense official said, an executive from Anthropic discussed the raid with an executive at Palantir, asking whether Anthropic's tools had been used. The Palantir executive relayed the question to the Defense Department, saying it implied that Anthropic might have disapproved of how Claude had been used, the official said. That prompted department leaders to call into doubt whether the company could be fully relied on. "They expressed concern over the Maduro raid, which is a huge problem for the department," one administration official said.
Similar Stories
Pentagon Anthropic feud has sales and AI warfare at stake as Friday deadline looms - Reuters
February 27, 2026 at 11:04 AM

Trump directs US agencies to toss Anthropic's AI as Pentagon calls startup a supply risk - Reuters
February 27, 2026 at 08:56 PM

Anthropic boss rejects Pentagon demand to drop AI safeguards
Defense Secretary Pete Hegseth previously threatened to remove the firm from the department's supply chain.
February 27, 2026 at 03:54 AM

Exclusive: Trump eyes Pentagon AI program for trade block's minerals pricing, sources say - Reuters
February 24, 2026 at 01:20 PM

US threatens Anthropic with deadline in dispute on AI safeguards
The AI developer laid out red lines on military use of its products, a source said.
February 24, 2026 at 10:58 PM

Trump Iranian missile claim unsupported by U.S. intelligence, say sources - Reuters
February 27, 2026 at 04:23 AM