Anthropic, the company behind the popular AI tool Claude, is suing the Trump administration after being designated a “supply chain risk” by the Pentagon. The move effectively bars defense contractors from using Claude in their work with the U.S. military.
The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, alleges the actions are “unprecedented and unlawful” and are causing irreparable harm to the company. The dispute highlights the growing tension between the U.S. government and AI developers over control of the technology’s use, particularly in sensitive national security applications.
“The Constitution does not permit the government to wield its vast power to punish a company for exercising its constitutionally protected speech,” Anthropic stated in the complaint. The company argues that no law authorizes the government to take such actions.
The conflict began when Anthropic refused to remove certain restrictions from its contract with the Department of Defense, including limitations on the use of its AI in autonomous weapon systems or for mass surveillance of citizens.
According to the lawsuit, Defense Secretary Pete Hegseth demanded the removal of these safeguards to allow the military unrestricted access to the technology. When Anthropic declined, the Pentagon designated the company as a supply chain risk, effectively preventing its tools from being used within U.S. government institutions.
Trump’s Intervention
President Donald Trump subsequently intervened, publicly criticizing Anthropic and directing federal agencies to cease using its technologies.
White House spokesperson Liz Huston described Anthropic as a “radically leftist and woke company” that was attempting to dictate terms to the U.S. military.
“Our military will be governed by the Constitution of the United States, not by the conditions of some woke AI company,” she stated.
Anthropic contends that the government’s response was disproportionate and has damaged its reputation and business relationships. The company estimates that the government’s actions jeopardize hundreds of millions of dollars in contracts.
“Current and future contracts with private partners are now in doubt,” the company stated in its filing. “In addition to those immediate economic harms, Anthropic’s reputation and core First Amendment freedoms are under attack.”
Claude is among the most widely used AI systems globally, deployed by major tech firms including Google, Meta, Amazon and Microsoft, all of which collaborate with the U.S. government. These companies have indicated they will continue to use Claude for non-defense-related projects.
The dispute has sparked reactions within the tech community, with nearly 40 employees from Google and OpenAI filing a document with the court in support of Anthropic.
These employees argue that current AI systems pose real risks if used for mass surveillance or autonomous weapons systems without human oversight. “These technologies require certain safety rules and limitations,” they stated.
Case Could Reach Supreme Court
Anthropic is not seeking financial compensation in the lawsuit, but rather a ruling that Trump exceeded his authority and violated the Constitution.
According to University of Richmond School of Law professor Carl Tobias, the case could ultimately reach the Supreme Court. “Anthropic may succeed in federal court, but the government will likely appeal,” Tobias said.
OpenAI, the company behind ChatGPT, seized on the situation, signing a contract with the Pentagon to deploy its models in military systems just hours after Trump ordered federal agencies to halt use of Anthropic’s technology.
