
Trump Bans Anthropic From Defense Contracts Over AI Restrictions

by Emily Johnson - News Editor

CNN  — 

The Trump administration has ordered federal agencies and contractors working with the military to cease doing business with Anthropic, a leading artificial intelligence company. The move comes after Anthropic refused to allow the Pentagon unrestricted access to its AI technology.

President Donald Trump announced on Truth Social Friday afternoon that agencies would have six months to phase out their use of Anthropic products. Later, Defense Secretary Pete Hegseth said on X that Anthropic would be considered a “risk to the supply chain,” a designation typically reserved for companies linked to foreign adversaries.

The escalating dispute between the government and a key player in the rapidly developing AI sector could set a precedent for how this technology is utilized.

At the heart of the conflict is Anthropic’s reluctance to grant the Pentagon unfettered access to its popular AI model, Claude.

The Pentagon, which currently uses Claude on its classified networks, wants the ability to use the system for “all legal purposes.” However, Anthropic has established two firm boundaries: Claude should not be used in the development of autonomous weapons and should not be employed for mass surveillance of U.S. citizens.

Pentagon officials maintain they have no intention of using the AI for those purposes and say they need flexibility in how they use the technology they are licensing.

The standoff reached a critical point Tuesday during a high-level meeting at the Pentagon between Hegseth and Anthropic CEO Dario Amodei. While a source familiar with the matter described the meeting as cordial, Trump’s comments Friday suggest the situation has shifted.

Anthropic signaled its unwillingness to concede to the Pentagon’s demands Thursday.

“Threats will not change our position: we cannot, in good conscience, accede to your request,” Amodei said in a statement.

Emil Michael, the Pentagon’s Under Secretary for Research and Engineering, told Bloomberg that they were “in the final stages” of reaching an agreement with Anthropic that would have “substantially accepted what they wanted” before the company’s Thursday statement.

“What we have is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially endangering our warfighters,” Pentagon spokesperson Sean Parnell wrote on X. “We will not allow ANY company to dictate the terms on how we make operational decisions.”

Trump stated on Truth Social Friday that Anthropic had made a “disastrous mistake” and accused the company of attempting to dictate how the Armed Forces operate. Shortly after Trump’s post, the General Services Administration announced it would remove Anthropic from USAi.gov, the federal government’s centralized testing ground for AI tools.

“No contractor, vendor, or partner doing business with the U.S. Armed Forces” will be permitted to do business with Anthropic, Hegseth said Friday.

The AI industry largely came to Anthropic’s defense this week, with OpenAI CEO Sam Altman stating he shares Anthropic’s concerns regarding working with the Pentagon.

Anthropic and OpenAI did not immediately respond to CNN’s requests for comment.

Anthropic’s Claude was the first AI model to operate on the military’s classified networks. The company signed a contract worth up to $200 million with the Pentagon last summer. Other major AI companies, such as OpenAI, have only signed agreements with the Pentagon on its unclassified networks.

Anthropic’s “acceptable use policy,” included in the contract, prohibits the use of Claude in mass surveillance and autonomous weapons.

“This dispute comes at a sensitive time because, on the one hand, the user base within the Department of Defense loves Anthropic, loves Claude, and says that its use restrictions, at least in the conversations I’ve had, have never been triggered,” said Gregory Allen, a senior fellow at the Center for Strategic and International Studies, on Bloomberg Radio.

However, the Pentagon does not want to be limited by a company’s policies. A Pentagon official told CNN: “You can’t run tactical operations by exception” and “legality is the responsibility of the Pentagon as the end user.”

From the Pentagon’s perspective, officials do not want to find themselves in a national security crisis forced to ask a company’s permission to have restrictions lifted.

Cutting ties with Anthropic could pose a challenge for the Pentagon if it needs to replace any internal systems utilizing Claude. Though a Pentagon official said Elon Musk’s Grok AI system “is willing to be used in a classified environment,” Grok is not considered as advanced as Claude.

Losing a $200 million contract would not pose an existential threat to Anthropic, which was recently valued at around $380 billion. The greater danger is the supply chain risk designation, which would mean any company working with the U.S. Armed Forces would need to demonstrate that its work for the Pentagon involves no affiliation with Anthropic.

Much of Anthropic’s success comes from its corporate contracts with large companies, many of which may have contracts with the Pentagon.

“So a large portion of Anthropic’s current customer base could disappear, either because they have government contracts or might want them in the future,” said Adam Connor, vice president of technology policy at the Center for American Progress, a Washington-based think tank.

Jensen Huang, CEO of major AI chipmaker Nvidia, said that while he expects the Pentagon and Anthropic to reach an agreement, “if it doesn’t get resolved, it’s not the end of the world,” since there are other AI companies the Pentagon can work with, and Anthropic has other customers.

Earlier this week, the Pentagon said it would also consider forcing Anthropic to work with it through the Defense Production Act, a 1950s law that “gives the president emergency authority to control national industries,” according to the Council on Foreign Relations. It is unclear whether the Pentagon could simultaneously compel Anthropic to work with it through the DPA and designate the company a supply chain risk, or how the two measures would interact.

Connor said Anthropic is not the only company under threat. The Pentagon’s move is a signal to other AI companies seeking to win contracts selling their services to the government.

“I think, in a broader sense, this sends a message to the other AI companies they’re negotiating with to make sure they don’t try to impose any kind of restrictions on the uses of AI,” Connor said.

If the Pentagon were simply dissatisfied with Anthropic’s conditions for its model, it could terminate the contract and obtain the AI model it wants from another company, said Alan Rozenshtein, a law professor at the University of Minnesota.

“What the government really wants is to continue using Anthropic’s technology, and it’s using every pressure point possible,” he said.

It remains unclear how the Armed Forces would replace Anthropic’s systems, or whether the government plans to take further action at this time.

“Taking out a national champion in AI at a time when the White House says the AI race with China is equivalent to the space race during the Cold War with the Soviet Union, you don’t want to take one of the crown jewels of your industry and light it on fire for something like this,” Allen said on Bloomberg.

“There’s a better way to resolve this dispute than the absolutist posture the government has taken.”
