Anthropic Refuses Pentagon Demand to Remove Safeguards on Claude AI

Artificial intelligence company Anthropic has declined a request from the US Department of Defense to remove safeguards from its AI system, Claude. CEO Dario Amodei said the company will not permit unrestricted military use of its technology, particularly in areas such as fully autonomous weapons and mass domestic surveillance.

The dispute reportedly began after the Pentagon asked AI contractors to agree to “any lawful use” of their systems. Anthropic responded by stating that certain applications, even if lawful, could cross ethical and technical boundaries. Amodei emphasized that while the company supports national security efforts, it will not compromise on its safety principles.

Support for Defense — With Ethical Limits

Anthropic has been actively involved in supporting US government operations. Claude is currently used within classified government networks and National Laboratories for tasks such as intelligence analysis, operational planning, cyber operations, and advanced modeling. According to the company, it was among the first frontier AI firms to deploy models in secure government environments.

Amodei stated that he strongly believes in using AI to defend democratic nations and counter authoritarian threats. However, he drew clear lines around two specific use cases.

The first involves mass domestic surveillance. Anthropic argues that AI systems capable of combining vast datasets — including browsing history, location data, and personal associations — could create intrusive monitoring capabilities at scale. The company said such uses would be incompatible with democratic values.

The second red line concerns fully autonomous weapons. While acknowledging that AI may eventually support defense systems, Amodei said current frontier models are not reliable enough to remove human oversight from targeting decisions. He warned that deploying AI without human control could pose risks to both military personnel and civilians.

Pentagon Pressure and Contract Risk

Reports indicate that the Pentagon has warned Anthropic that it could lose its government contract if it refuses to remove the safeguards. There have also been suggestions of more aggressive measures, such as invoking the Defense Production Act or designating the company a supply chain risk.

Despite the tension, Anthropic has expressed willingness to collaborate with defense officials on improving AI reliability and developing safe deployment frameworks. However, the company has stated it will not provide a product that could be used for unrestricted military purposes.

The situation highlights a growing debate in the AI sector about balancing national security needs with ethical responsibility. As governments increasingly rely on advanced AI systems, companies are facing complex decisions about how their technologies should be deployed.

Anthropic has reiterated that it remains committed to supporting national security while upholding strict safety standards, even if that stance comes at the cost of lucrative government contracts.
