Microsoft has filed an amicus brief supporting Anthropic's lawsuit against the U.S. Department of Defense, urging a federal judge to freeze specific AI procurement initiatives and warning that blacklisting the AI firm as a national security risk could cripple U.S. military capabilities and the broader AI ecosystem. The rare alliance between the tech giant and the AI startup challenges restrictive Pentagon policies that could limit the diversity of models used in national security, in a dispute centering on Anthropic's refusal to allow its Claude models to be used for lethal autonomous warfare.
Anthropic executives have issued a stark warning that a potential blacklisting by the U.S. Department of Defense could jeopardize billions of dollars in future sales. The company claims such a move would not only cause severe financial distress but also inflict lasting damage on its reputation as a leader in safe and reliable AI.
Anthropic has filed a federal lawsuit against the Trump administration, seeking to overturn a Department of Defense order that designates the company a 'supply chain risk' and effectively bans its AI models from military use. The legal challenge marks a significant escalation in the conflict between the safety-focused AI developer and the Pentagon's aggressive new national security vetting policies.
Caitlin Kalinowski, OpenAI’s head of robotics, has resigned following the company's agreement to deploy AI models within the Pentagon's classified networks. Her departure highlights growing internal friction over the ethical boundaries of military AI, specifically regarding domestic surveillance and autonomous weaponry.
Pentagon Tech Chief Emil Michael has publicly criticized AI startup Anthropic, signaling a growing rift between defense requirements and the ethical guardrails of safety-first AI labs. Michael emphasized the need for partners who will not 'wig out' when faced with the realities of autonomous drone systems and AI-driven weaponry.
The Trump administration is drafting strict new AI procurement rules requiring companies to permit 'any lawful' use of their models by the government. This regulatory shift follows the Pentagon's designation of Anthropic as a 'supply-chain risk' due to a dispute over safety guardrails.
OpenAI CEO Sam Altman has publicly acknowledged that the company's recent partnership with the U.S. Department of Defense was "rushed," leading to perceptions of being "opportunistic and sloppy." Despite the admission of poor optics, the company is doubling down on the collaboration while attempting to clarify the scope of its military involvement.
The US State Department, Treasury, and FHFA are terminating all contracts with Anthropic following a directive from President Donald Trump. The State Department is transitioning its 'StateChat' platform to OpenAI's GPT-4.1, while the Pentagon has designated Anthropic a 'supply-chain risk' after disputes over technology guardrails.
The Trump administration has designated Anthropic a 'supply-chain risk' and banned federal agencies from using its technology following a standoff over AI safety guardrails. This unprecedented move effectively bars defense contractors from working with the startup, potentially reshaping the competitive landscape of the frontier AI market.
President Donald Trump has directed all U.S. government agencies to terminate their use of Anthropic's AI technology, following a Pentagon declaration labeling the startup a supply-chain risk. The directive initiates a six-month phase-out period and marks a significant escalation in the administration's intervention in the AI sector, effectively blacklisting one of the industry's leading labs after a high-profile dispute over the company's safety guardrails.
The Pentagon has issued a formal clarification stating that the US military's use of Anthropic’s AI technology will be strictly governed by international and domestic legal frameworks. This move highlights the growing integration of advanced large language models into defense operations while addressing ethical concerns surrounding autonomous systems.
The U.S. Department of Defense has issued a formal ultimatum to Anthropic, leveraging the Defense Production Act to compel cooperation on national security initiatives. This escalation follows CEO Dario Amodei’s public reservations regarding the ethical implications of unchecked military AI deployment.
US Defense Secretary Pete Hegseth has threatened to terminate Anthropic’s $200 million military contract by Friday if the company refuses to lift restrictions on its AI for autonomous targeting and domestic surveillance. The standoff marks a significant escalation in the clash between Silicon Valley's safety-focused AI firms and the Pentagon's push for unrestricted technological integration.