Monday, March 2, 2026

Tech workers urge DOD, Congress to withdraw Anthropic's label as a supply chain risk

Hundreds of tech workers have signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a "supply chain risk." The letter also calls on Congress to step in and "examine whether the use of these extraordinary authorities against an American technology company is appropriate."

The letter includes signatories from major technology and venture capital firms including OpenAI, Slack, IBM, Cursor, Salesforce Ventures, and more. It follows a dispute between the DOD and Anthropic after the AI lab last week refused to give the military unrestricted access to its AI systems.

Anthropic's two red lines in its negotiations with the Pentagon were that it did not want its technology used for mass surveillance of Americans or to power autonomous weapons that make targeting and firing decisions without a human in the loop. The DOD said it had no plans to do either of those things, but that it did not believe it should be restricted by a vendor's rules.

In response to Anthropic CEO Dario Amodei's refusal to cave to Hegseth's threats, President Donald Trump on Friday directed federal agencies to stop using Anthropic's technology after a six-month transition period. Hegseth said he would make good on his threats and designate Anthropic a supply chain risk, a designation typically reserved for foreign adversaries that could blacklist the AI firm from working with any agency or company that does business with the Pentagon.

In a post on Friday, Hegseth wrote: "Effective immediately, no contractor, supplier, or partner that does business with the US military may conduct any commercial activity with Anthropic."

But a post on X does not automatically make Anthropic a supply chain risk. The government needs to complete a risk assessment and notify Congress before military partners have to cut ties with Anthropic or its products. Anthropic said in a blog post that the designation is "legally unsound" and that it would "challenge any supply chain risk designation in court."

Many in the industry see the administration's treatment of Anthropic as harsh and as clear retaliation.


"When two parties can't agree on terms, the normal course is to part ways and work with a competitor," the open letter reads. "This situation sets a dangerous precedent. Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation."

Beyond concern over the government's harsh treatment of Anthropic, many in the industry remain worried about potential government overreach and the use of AI for nefarious purposes.

Boaz Barak, an OpenAI researcher, wrote in a social media post on Monday that blocking governments from using AI for mass surveillance is his "personal red line" and "it should be all of ours."

Moments after Trump publicly attacked Anthropic, OpenAI announced it had reached a deal of its own for its models to be deployed in the DOD's classified environments. OpenAI CEO Sam Altman said last week that the firm has the same red lines as Anthropic.

"If anything good can come out of the events of the last week, it would be if we in the AI industry start treating the issue of using AI for government abuse and surveilling its own people as a catastrophic risk in its own right," Barak wrote. "We have done a good job of evaluations, mitigations, and processes for risks such as bioweapons and cybersecurity. Let's use similar processes here."
