28 Feb 2026


Trump orders halt to Anthropic AI in federal agencies

Dispute over military safeguards escalates into major policy clash

US President Donald Trump has directed federal agencies to stop using artificial intelligence systems developed by Anthropic, escalating a growing clash between the White House and the technology sector over how AI should be used in government.

The order requires departments to begin phasing out Anthropic’s tools, even where they are already embedded in workflows. Officials say the move follows a breakdown in talks between the administration and the company over restrictions placed on the use of its AI models, particularly in defence-related settings.

At the centre of the dispute are limits built into Anthropic’s flagship AI system, Claude. The company has insisted on guardrails designed to prevent uses such as mass surveillance or fully autonomous weapons. Administration officials, including representatives from the Department of Defense, argue that such restrictions interfere with legitimate national security needs.

Trump framed the decision as a matter of executive authority, saying the federal government should not be constrained by private firms when it comes to defence and security. The administration is also reviewing whether Anthropic could face additional contractual or regulatory consequences.

Anthropic’s chief executive, Dario Amodei, has defended the company’s stance, saying its safety standards are fundamental to responsible AI development. He has signalled that the firm is prepared to challenge any punitive action, arguing that ethical boundaries are not optional features but core safeguards.

The confrontation has drawn attention across Silicon Valley, where AI companies are increasingly partnering with government agencies. Competitors, including OpenAI, are closely monitoring the situation as they negotiate their own federal contracts.
