Dario Amodei said Thursday that Anthropic plans to challenge in court the Department of Defense's decision to label the AI company a supply-chain risk, a designation he has called "legally unsound."
The statement comes just hours after the DOD formally designated Anthropic a supply-chain risk, following a weeks-long dispute over how much control the military should have over AI systems. A supply-chain risk designation can bar a company from working with the Pentagon and its contractors. Amodei drew a firm line that Anthropic's AI will not be used for mass surveillance of Americans or for fully autonomous weapons, but the Pentagon believed it should have unrestricted access for "all lawful purposes."
In his statement, Amodei said the overwhelming majority of Anthropic's customers are unaffected by the supply-chain risk designation.
"With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," he said.
As a preview of what Anthropic will likely argue in court, Amodei said the Department's letter labeling the company a supply-chain risk is narrow in scope.
"It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain," Amodei said. "Even for Department of War contractors, the supply-chain risk designation does not (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts."
Amodei reiterated that Anthropic had been having productive conversations with the DOD over the last several days, conversations that some suspect were derailed when an internal memo he sent to staff was leaked. In it, Amodei characterized rival OpenAI's dealings with the Department of Defense as "security theater."
OpenAI has signed a deal to work with the DOD in Anthropic's place, a move that has sparked backlash among OpenAI employees.
Amodei apologized for the leak in his Thursday statement, saying the company did not intentionally share the memo or direct anyone else to do so. "It's not in our interest to escalate the situation," he said.
Amodei said the memo was written within "a few hours" of a series of announcements, including a presidential Truth Social post saying Anthropic would be removed from federal systems, then Defense Secretary Pete Hegseth's supply-chain risk designation, and finally the Pentagon's deal announcement with OpenAI. He apologized for the tone, calling it "a tough day for the company," and said the memo did not reflect his "careful or considered views." Written six days ago, he added, it is now an "out-of-date assessment."
He closed by saying Anthropic's top priority is to ensure American soldiers and national security experts maintain access to critical tools in the midst of ongoing major combat operations. Anthropic is currently supporting some of the U.S.'s operations in Iran, and Amodei said the company would continue to offer its models to the DOD at "nominal cost" for "as long as necessary to make that transition."
Anthropic could challenge the designation in federal court, likely in Washington, but the law behind the decision makes it harder to contest because it limits the usual ways companies can challenge government procurement decisions and gives the Pentagon broad discretion on national security matters.
Or as Dean Ball, a former Trump-era White House adviser on AI who has spoken out against Hegseth's treatment of Anthropic, put it: "Courts are pretty reluctant to second-guess the government on what is and isn't a national security issue … There's a very high bar that one needs to clear in order to do that. But it's not impossible."
