Anthropic was the Pentagon’s choice for AI. Now it’s banned and experts are worried

Dario Amodei, chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.

Ruhani Kaur | Bloomberg | Getty Images

Last August, Pentagon technology chief Emil Michael, a former Uber executive and attorney, took on the added role of overseeing the Defense Department’s artificial intelligence portfolio. A month earlier, Anthropic had been awarded a $200 million DOD contract that expanded its work with the agency.

“I said, ‘I just want to see the contracts,'” Michael told the All-In Podcast on Friday, reflecting on his early days managing the AI portfolio. “You know, the old lawyer in me.”

Michael’s request kicked off a months-long review process that culminated in the Defense Department banning Anthropic’s technology, leaving the military without its hand-picked AI models to operate in the most sensitive environments. In an extraordinary move, the DOD designated Anthropic a supply chain risk, a label that’s historically only been applied to foreign adversaries. It will require defense vendors and contractors to certify that they don’t use the company’s models in their work with the Pentagon.

Anthropic sued the Trump administration on Monday, calling the government’s actions “unprecedented and unlawful,” and claiming that they are “harming Anthropic irreparably,” putting hundreds of millions of dollars worth of contracts in jeopardy.

The DOD’s sudden reversal came as a shock to many officials in Washington who viewed Anthropic’s models as superior — they were the first to be deployed in the agency’s classified networks — and championed the company’s ability to integrate with existing defense contractors like Palantir. The decision was all the more puzzling since the Trump administration had threatened during negotiations to invoke the Defense Production Act, which could have forced Anthropic to grant the military access to its technology.

“I don’t know how those two things can both be true in reality,” said Mark Dalton, a retired Navy rear admiral who now leads technology and cybersecurity policy at R Street, a think tank in Washington, D.C. “Something is so necessary that you need to invoke DPA and so harmful that you put a designation on it that’s reserved for foreign adversaries.”

Defense experts like Dalton expressed concern about the government’s decision. Not only does it set a troubling precedent, they argue, but it also means the administration is banishing a key technology vendor that’s been lauded for its diligence on AI safety, its tough rhetoric against China and its entrepreneurial chops as one of the fastest-growing tech startups in the U.S.

Former DOD official Brad Carson, who’s now co-founder and president of AI policy nonprofit Americans for Responsible Innovation, said the move is particularly troubling for military personnel, who have come to rely on Claude. An ex-Navy intelligence officer who served in Iraq, Carson said he’s talked to a number of retired officers who told him that “warfighters are not happy about it.”

“You’re not so excited if you’re in the military,” said Carson, who worked in President Obama’s Defense Department until 2016 and before that was deployed to Iraq while in the Army and also served two terms in Congress as a Democrat from Oklahoma. “They view Claude as being a better product, the most reliable, with the most user-friendly outputs they can assimilate into planning.”

CNBC spoke to 17 AI policy experts, former Palantir and Anthropic employees, tech analysts and researchers about Anthropic’s critical role in the Defense Department and what comes next. Several of the people asked not to be named because they weren’t authorized to speak on the matter.

Anthropic CEO Dario Amodei founded the San Francisco-based company in 2021 alongside his sister, Daniela Amodei, and a handful of other researchers. The group had defected from OpenAI, before the launch of ChatGPT, over concerns about the company’s direction and attitude toward safety. They spent years carefully constructing Anthropic’s reputation as a firm that was more dedicated to responsible AI deployment. 

Anthropic launched its family of AI models, known as Claude, in March 2023, a few months after ChatGPT hit the market and quickly went viral. In the three years since introducing Claude, Anthropic has raised billions of dollars of capital, en route to a $380 billion valuation.

The company is now under immense pressure to justify that price tag and has been forced to rapidly commercialize its technology in an effort to keep pace with OpenAI and other rivals like Google.

While OpenAI was enthralling consumers, Anthropic found quick success selling to large enterprises, including the DOD. It’s an area Amodei started focusing on early, recognizing the business and societal importance of working closely with the government and military and helping to establish principles for safe uses of a technology that has the power to bring about potential catastrophes, according to people with knowledge of the matter.

The company began building relationships and making inroads with officials in Washington, D.C., and Amodei was among the few AI industry executives invited to meet with then-Vice President Kamala Harris, in May 2023. 

WATCH: Why the U.S. Defense Department blacklist of Anthropic is so unprecedented