US Agencies Pivot to OpenAI as Trump Labels Anthropic a Supply-Chain Risk
Key Takeaways
- The US State Department, Treasury, and FHFA are terminating all contracts with Anthropic following a directive from President Donald Trump.
- The State Department is transitioning its 'StateChat' platform to OpenAI's GPT-4.1, while the Pentagon has designated Anthropic a 'supply-chain risk' after disputes over technology guardrails.
Key Intelligence
Key Facts
1. President Trump ordered all US government agencies to terminate contracts with Anthropic, including its Claude platform.
2. The US State Department is transitioning its 'StateChat' tool to OpenAI's GPT-4.1 model.
3. The Pentagon has officially designated Anthropic as a 'supply-chain risk' following a dispute over AI guardrails.
4. The US Treasury and FHFA confirmed the immediate termination of Anthropic products on March 2, 2026.
5. A six-month phase-out period has been established for the Defense Department and other critical agencies.
6. OpenAI secured a new deal to deploy AI technology within the Pentagon's classified network.
| Metric | Anthropic | OpenAI |
|---|---|---|
| Gov Status | Supply-Chain Risk | Preferred Provider |
| Primary Product | Claude | GPT-4.1 |
| Key Agency | None (Phasing Out) | State Dept, Pentagon |
| Network Access | Barred | Classified Network Deal |
Analysis
The landscape of federal AI procurement has undergone a seismic shift as the Trump administration moves to blacklist Anthropic, one of the world’s leading artificial intelligence safety labs. In an unprecedented executive maneuver, President Donald Trump directed government agencies to terminate all work with the San Francisco-based startup, effectively labeling a domestic technology leader as a 'supply-chain risk.' This designation, typically reserved for foreign adversaries or compromised hardware suppliers, marks a dramatic escalation in the ideological and technical rift between the administration and AI labs that prioritize stringent safety guardrails.
The immediate beneficiary of this policy shift is OpenAI. A leaked internal memo confirms that the US State Department is already transitioning its internal chatbot, StateChat, from Anthropic’s Claude models to OpenAI’s GPT-4.1. This transition is not isolated to the State Department; the US Treasury and the Federal Housing Finance Agency (FHFA) have also confirmed the total cessation of Anthropic product usage. For Anthropic, which has positioned itself as the 'safety-first' alternative to OpenAI, this move represents a catastrophic loss of institutional trust and market share within the public sector.
The Pentagon’s role in this transition is particularly telling. By declaring Anthropic a supply-chain risk, the Department of Defense is signaling that the company’s internal 'guardrails'—often referred to as Constitutional AI—are now viewed as a liability rather than an asset for national security. While the specific nature of the 'showdown' over these guardrails remains classified, the administration's preference for OpenAI’s technology suggests a pivot toward models that may offer more flexibility or alignment with the executive branch’s specific operational requirements. OpenAI further solidified its position by announcing a new deal to deploy its technology within the Defense Department’s classified networks, creating a near-monopoly on high-stakes government AI applications.
What to Watch
The operational impact of this directive will be felt across the federal ecosystem over the next six months. Agencies including the Treasury, led by Secretary Scott Bessent, and the FHFA, under Director William Pulte, are moving rapidly to purge Anthropic's Claude platform from their workflows. This purge extends to major government-sponsored enterprises like Fannie Mae and Freddie Mac, which rely heavily on AI for mortgage processing and risk assessment. The six-month phase-out period granted to the Defense Department suggests that while the mandate is absolute, the technical complexity of swapping out large language model (LLM) backends requires a structured transition to avoid service interruptions.
Looking forward, this development sets a chilling precedent for the AI industry. It suggests that 'safety' and 'alignment'—the very pillars upon which Anthropic was founded—can be reinterpreted as political or security risks by the state. As OpenAI deepens its integration with the Pentagon and State Department, the industry may see a consolidation of power where government-approved AI providers are chosen not just for their technical capabilities, but for their willingness to conform to the administration's specific vision for AI deployment. For Anthropic, the challenge will be to survive as a commercial entity while being effectively barred from the world’s largest purchaser of technology services.
Timeline
Executive Directive
President Trump orders government agencies to stop all work with Anthropic.
Pentagon Risk Label
The Defense Department declares Anthropic a supply-chain risk.
Treasury & FHFA Exit
Secretary Scott Bessent and Director William Pulte confirm termination of Anthropic usage.
State Department Transition
Internal memo reveals StateChat is switching to OpenAI's GPT-4.1.
Phase-out Deadline
Final deadline for the Defense Department to complete the transition away from Anthropic technology.