US senator Elizabeth Warren in a letter to Defense Secretary Pete Hegseth on Anthropic's 'supply chain risk' tag: 'I am particularly concerned…'
Senator Elizabeth Warren has questioned the Pentagon's blacklisting of AI company Anthropic. Warren has reportedly sent a formal letter to Defense Secretary Pete Hegseth questioning the decision to designate Anthropic a “supply chain risk” – a label that effectively bars the company from government work. She said that the decision “appears to be retaliation” for Anthropic's refusal to grant the military unrestricted access to its AI models.

Warren wrote that The Pentagon “could have chosen to terminate its contract with Anthropic or continued using its technology in unclassified systems”, as per CNBC.
“I am particularly concerned that the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards,” Warren added.
In the days leading up to the current conflict with Iran, the Pentagon sought unrestricted access to Anthropic's models for all “lawful purposes”. Anthropic, however, sought assurances that its technology would not be used for fully autonomous weapons or domestic mass surveillance. The Pentagon refused, and Hegseth publicly directed the DoD to apply the supply chain risk label on February 27.
Warren targets OpenAI and CEO Sam Altman
Hours after Anthropic was blacklisted, OpenAI stepped in with a deal with the Department of War. Warren also wrote to OpenAI CEO Sam Altman raising alarm about the use of artificial intelligence (AI) for mass surveillance and autonomous weapons. She asked for full details of the terms of OpenAI's agreement with the Defense Department.
“I am concerned that the terms of this agreement may permit the Trump Administration to use OpenAI's technology to conduct mass surveillance of Americans and build lethal autonomous weapons that could harm civilians with little to no human oversight,” she wrote.
“Ultimately, it is impossible to assess any safeguards and prohibitions that may exist in OpenAI's agreement with DoD without seeing the full contract, which neither DoD nor OpenAI have made available,” she added.
OpenAI has previously said it is confident the defence department would not use its systems for mass surveillance or fully autonomous weapons, citing its own “safety stack”, existing laws and contract language.