U.S. District Judge Rita Lin said Tuesday that the Pentagon’s decision to blacklist Anthropic’s Claude artificial intelligence models “looks like an attempt to cripple” the company.
Anthropic appeared in San Francisco federal court on Tuesday to ask Lin to temporarily pause the Pentagon’s blacklisting and President Donald Trump’s directive banning federal government agencies from using its technology.
The company noted that an injunction would not require the U.S. government to use its models or prevent it from transitioning to another AI vendor.
During the hearing, Lin asked lawyers for Anthropic and the U.S. government a number of questions about the details of the case. She said her concern is whether Anthropic is being “punished for criticizing the government’s contracting position in the press.”
“Everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor,” Lin said. “I don’t see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law.”
Lin said she expects to issue an order on Anthropic’s motion in the next few days.
If the preliminary injunction is granted, the AI startup would be able to continue doing business with government contractors and federal agencies as its lawsuit against the Trump administration plays out in court. Without it, the company has said in filings, it could lose billions of dollars in business and suffer further reputational harm.
Earlier in March, the Department of Defense designated Anthropic a so-called supply chain risk, meaning that use of the company’s technology purportedly threatens U.S. national security. The label, if it stands, would require defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military.
Eric Hamilton, lawyer for the U.S. government, said Tuesday the DOD had “come to worry that Anthropic may in the future take action to sabotage or subvert IT systems,” which is why the company was designated a supply chain risk.
“What happens if Anthropic installs a kill switch or functionality that changes how it functions? That is an unacceptable risk,” Hamilton said.
Later in the hearing, Lin pressed Hamilton on when the DOD views a supply chain risk designation as the appropriate course of action.
“What I’m hearing from you, though, is that it’s enough if an IT vendor is stubborn and insists on certain terms and it asks annoying questions, then it can be designated as a supply chain risk because they might not be trustworthy. That seems a pretty low bar.”
Anthropic has argued that there is no basis to consider the company a supply chain risk.
The company also said it is being unfairly retaliated against because it demanded that the DOD not use Claude for fully autonomous weapons or mass surveillance of Americans. The Pentagon insists it does not use the AI models for such purposes.
“This is something that has never been done with respect to an American company,” Anthropic’s lawyer Michael Mongan said during the hearing. “It is a very narrow authority. It doesn’t apply here, and it’s not a normal way to respond to the concerns that have been articulated by the other side.”
Before the conflict erupted in late February, Anthropic was one of the first AI companies to partner with many federal agencies as the government sought to rapidly upgrade its systems and capabilities with cutting-edge AI tech.
Anthropic signed a $200 million contract with the Pentagon in July and was the first AI lab to deploy its technology across the agency’s classified networks.
But as the company began negotiating Claude’s deployment on the DOD’s GenAI.mil AI platform in September, talks stalled over how the military could use the models.
The department has insisted on unfettered access to the company’s technology for all lawful purposes, and Hamilton said Tuesday that Anthropic was going beyond the normal scope of a contractor.
“Anthropic is not just acting stubbornly. It’s not just refusing to agree to contracting terms. Instead, it’s raising concerns to [DOD] about how [DOD] uses its technology in military missions,” Hamilton said.
In February, after Anthropic and the DOD failed to reach an agreement, Trump issued a Truth Social post ordering federal agencies to “immediately cease” all use of Anthropic’s technology.
“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.

CNBC’s Jeff Kopp and Dan Mangan contributed to this story.