Judge gives Pentagon the 'Russia and China' rebuke in Anthropic case that many have been making since 'Day One'
A federal judge in San Francisco has criticised the Pentagon’s move to label AI giant Anthropic a “supply chain risk”. As reported by Business Insider, at a hearing on March 24, Judge Rita Lin said the designation of Anthropic as a national security risk appeared less like a national security measure and more like an attempt to “cripple Anthropic”. Defense secretary Pete Hegseth had formally notified Anthropic that the company and its products would be blacklisted. This is the first time a US company has been designated a “supply chain risk”, a label that bars the company from obtaining government contracts and limits the use of Anthropic’s technology.

Clash between Anthropic and Pentagon
Anthropic CEO Dario Amodei had earlier refused the Pentagon’s demand for unfettered access to its AI models “for any lawful use”, citing concerns that such language could enable surveillance of Americans or the deployment of autonomous weapons before safeguards were ready.
Judge Lin also noted that the “supply chain risk” designation is typically reserved for adversaries like Russia or China, not domestic companies. “DOW could just stop using Claude,” she said, referring to the Department of War, the Trump administration’s preferred name for the Pentagon. “It looks like they went further than that because they were trying to punish Anthropic.”
Donald Trump imposed broad ban on Anthropic
The case also involves a separate order from President Donald Trump, posted on Truth Social, directing all federal agencies to stop using Anthropic within six months. Lin noted the sweeping scope of that order, saying it could even apply to agencies like the National Endowment for the Arts that might use Claude for website design.
Anthropic argued in filings that the designation jeopardizes “hundreds of millions of dollars in the near-term” and undermines its reputation and First Amendment rights. The Justice Department countered that the designation must stand because of the “future risk” posed by how Anthropic could update its models.
Judge Lin is weighing whether to lift the ban while the case proceeds to trial. The outcome could set a precedent for how far the federal government can go in restricting AI vendors under national security powers.