US Defense Department Sets Friday Deadline for Anthropic to Lift AI Restrictions

The Pentagon is demanding that Anthropic allow unrestricted military use of its technology, warning that the company must comply by Friday or face enforcement under federal emergency powers. The deadline highlights growing tension between AI safety commitments and military requirements.
Anthropic, led by Dario Amodei, has resisted allowing its Claude AI models to be used for mass surveillance or fully autonomous weapons. The company has consistently emphasized safety and ethical AI development, prioritizing responsible technology over rapid deployment. But the Pentagon warned that refusal could lead to Anthropic being designated a supply chain risk, jeopardizing future government contracts.
The Friday deadline stems from the Defense Production Act, a Cold War-era law granting the government sweeping powers to prioritize national security needs. The Pentagon maintains that its orders are lawful and serve its national security mission. Officials have also raised sensitive use cases, including intercontinental ballistic missiles and other military AI applications, increasing the pressure on Anthropic to comply.
Other AI firms such as OpenAI and Google are reportedly close to similar approvals, while Elon Musk's Grok system has reportedly already been cleared for classified use. This competitive environment puts Anthropic in a difficult position, forcing it to balance its ethical AI principles against government demands. Last year, the company joined a $200 million agreement to supply AI for military purposes.
The standoff underscores the urgent need for dialogue between private AI developers and the government. While Anthropic's commitment to AI safety and ethics is clear, national security demands are accelerating. How the company responds could reshape future collaboration between AI firms and the Pentagon.