
Anthropic’s Bid to Lift ‘Supply Chain Risk’ Label Suffers Setback in US Appeals Court


Left: Anthropic co-founder and CEO Dario Amodei speaks at INBOUND 2025 on Sept. 4, 2025, in San Francisco, California. Right: Defense Secretary Pete Hegseth listens during a Pentagon briefing on April 8, 2026, in Arlington, Virginia. The artificial intelligence company is fighting the Pentagon over the use of its technology during warfare, as the U.S. government reportedly continues to deploy its Claude model in Iran. (Chance Yeh/Getty Images for HubSpot; Andrew Harnik/Getty Images)

A federal appeals court in Washington on Wednesday denied Anthropic’s request for relief from the Defense Department’s declaration that the company is a supply-chain risk.

The ruling is the latest battle in the multi-front fight between the U.S. government and one of the country's leading AI companies, even as the two reportedly continue working together in the war with Iran.

A separate court in San Francisco recently blocked President Donald Trump’s broader ban on government use of Anthropic’s model, Claude.

U.S. District Judge Rita F. Lin said the ban “looked like an attempt to cripple Anthropic,” after the company went public about its dispute over the use of Claude by the military.

“Nothing … supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin wrote.

But the three-judge panel in Washington wrote that “the equitable balance here cuts in favor of the government,” though it acknowledged Anthropic will continue to be excluded from new contracts and Pentagon systems.

The Salesforce Tower is seen reflected in windows of 500 Howard Street, where AI firm Anthropic subleased Slack’s office, in downtown San Francisco, California on Oct. 19, 2023. (Loren Elliott for The Washington Post via Getty Images)

The appeals court said that granting a stay would “force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.”

The court set oral arguments in the case for May 19.

The feud between Anthropic and the Trump administration publicly escalated in February. Following tense behind-the-scenes negotiations and an announcement from CEO Dario Amodei that he would not allow Claude to be used for autonomous weapons or to surveil American citizens, Defense Department officials responded with a series of punishments.

Anthropic's complaints lean heavily on social media statements by Pentagon officials, including posts by Trump, Defense Secretary Pete Hegseth and others, citing them as evidence that the government's actions were ideologically motivated and "arbitrary, capricious and an abuse of discretion."

Trump called Anthropic “a radical left, woke company” populated by “leftwing nut jobs,” and Hegseth attacked the company as arrogant and duplicitous. Anthropic’s lawyers argued these posts expose the ideological, rather than national security, motivation behind the government’s actions.

That said, the Wall Street Journal reported that the Defense Department continues to use Claude in the war in Iran.

"We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson told KQED following the appeals court decision in Washington on Wednesday. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

Noted AI scientist and skeptic Gary Marcus said he favored Anthropic’s chances, and that the government’s supply chain risk designation “made no sense.”

The Anthropic logo is displayed on a smartphone screen on March 31, 2026. (Jonathan Raa/NurPhoto)

“At the moment Anthropic seems to have something of a technical lead, and it would just be cutting off DoD’s nose to spite their face to exclude them. Especially in wartime, that’s just ridiculous,” he told KQED by email.

In recent weeks, OpenAI swooped in to claim the $200 million contract Anthropic was negotiating for with the Defense Department. But the deal likely cost Anthropic’s rival more than that in subscriber defections alone. A website where people pledged to cancel their subscriptions claims OpenAI lost 1.5 million paying users, as the company faces an estimated $14 billion loss in operational costs.

“The DoD contract is small potatoes in itself,” UC Berkeley AI pioneer Stuart Russell wrote. The real play, he argued, is indispensability. “I think the intent was to make OpenAI indispensable to the government, raising the likelihood of a bailout (a possibility suggested by OpenAI last year).”

The “supply chain risk” designation for Anthropic? “I assume it will eventually be rescinded,” Russell said.
