The US Department of Defense is intensifying efforts to secure broader access to advanced artificial intelligence systems for military use, prompting high-level talks between Defence Secretary Pete Hegseth and Anthropic CEO Dario Amodei. The meeting reflects growing pressure on AI developers to allow their most powerful models to operate inside classified military networks with fewer restrictions.
At the centre of the dispute is Anthropic’s AI system Claude, one of the few frontier AI models deployed in sensitive defence and intelligence environments. Pentagon officials are seeking expanded operational use, while the company has insisted on maintaining safety guardrails governing how its technology can be used.
US defence planners view advanced AI as critical for intelligence analysis, battlefield decision support, and cyber operations. The Pentagon has been urging major AI companies to allow their systems to be used for “all lawful purposes,” including defence operations conducted on classified networks.
Claude’s performance in handling complex secure data and analytical tasks has made it a valuable tool within defence systems. However, restrictions imposed by Anthropic have limited certain applications, creating friction between military objectives and AI safety policies.
Anthropic has positioned itself as one of the most safety-focused AI companies, placing limits on uses involving autonomous weapons, surveillance, or violent applications.
CEO Dario Amodei has repeatedly warned about risks associated with deploying powerful AI without safeguards, highlighting concerns about misuse and escalation.
This safety-first approach has created a policy clash with defence officials who argue that operational flexibility is essential for national security.
Frustration over restrictions has led the Pentagon to explore alternatives and diversify its AI partnerships. Elon Musk’s xAI recently secured a deal to deploy its Grok model in classified military systems, marking a significant shift in defence AI strategy.
Officials have also considered designating Anthropic a “supply chain risk,” a move that could restrict its technology from defence ecosystems.
Reports indicate that Claude has been used in high-security operations, underscoring the growing role of AI in defence decision-making and intelligence workflows.
The expanding use of AI in national security highlights both its strategic value and ethical complexities.
The Pentagon–Anthropic dispute reflects a broader divide between national security priorities and Silicon Valley’s AI safety concerns. While defence agencies emphasize strategic advantage and operational speed, AI developers warn that removing safeguards could lead to misuse, escalation risks, and erosion of ethical standards.
As global powers accelerate military AI adoption, the outcome of this dispute could influence future defence technology partnerships and shape international norms around AI use in warfare.
Published: 9h ago