A serious disagreement is unfolding between the Pentagon and the artificial intelligence company Anthropic over how advanced AI systems should be used in military operations. The conflict centers on restrictions placed on Claude, the AI model developed by Anthropic, and how those limits affect defense activities.
The Pentagon is reportedly close to cutting ties with Anthropic because the company has placed firm restrictions on how Claude can be used by the military. These limits are meant to prevent the AI from being used in harmful or unethical ways.
One of the most severe actions under discussion is labeling Anthropic as a “supply chain risk.” This designation is typically reserved for foreign adversaries or companies viewed as threats to national security. If the label is applied, all U.S. defense contractors would be required to immediately stop working with Anthropic.
Such a decision would have major consequences. Anthropic would lose a significant portion of its defense-related business. At the same time, the Pentagon would lose access to an AI system already operating within parts of its classified infrastructure.
This dispute highlights growing tension between national security demands and private companies that develop powerful technologies but insist on controlling how those tools are used.
Dispute Over Restrictions and Ethical Limits
At the heart of the conflict is a disagreement over authority and responsibility. Defense officials are demanding the ability to use AI for “all lawful purposes.” This phrase is broad and includes activities such as intelligence analysis, operational planning, and surveillance-related tasks.
Anthropic has refused to grant such wide permission. While the company has indicated it is open to loosening some restrictions, it has drawn clear boundaries. Anthropic does not want Claude used for spying on Americans. It also does not want the AI involved in building or supporting autonomous weapons that can operate without direct human control.
These limits are part of Anthropic’s responsible-use policy. The company believes that allowing unrestricted military use could lead to serious harm and undermine public trust in artificial intelligence.
Defense officials argue that these restrictions interfere with operational needs. They believe national security work requires flexibility and unrestricted access to advanced tools. According to reports, this disagreement has escalated beyond internal discussions and is now threatening the entire working relationship between the Pentagon and Anthropic.
The situation also raises concerns across the defense contractor ecosystem. If Anthropic is officially labeled a supply chain risk, contractors using Claude would be forced to abandon the technology, even if it is deeply embedded in their systems.
Classified Systems and Rising Concerns About AI in Warfare
The conflict has drawn even more attention because Claude is reportedly the only AI system currently approved for use on certain classified Pentagon systems. This makes the technology especially important to ongoing defense operations.
According to reporting by Axios, Claude was also used through Palantir, a defense-focused data analytics company, during an operation linked to the capture of Nicolás Maduro earlier this year. While this claim has not been independently verified, it has added to the sensitivity surrounding the dispute.
The report underscores how deeply artificial intelligence is already involved in real-world military activities. It also explains why Anthropic is determined to control how its technology is deployed.
Experts have long warned about the dangers of unchecked AI use in warfare. These concerns include errors caused by flawed data, lack of human oversight, and the potential for AI systems to be used in ways that violate rights or laws.
The current standoff highlights a growing challenge. Private technology companies now build tools that governments depend on. When those companies attempt to enforce ethical rules, they may face pressure if their policies conflict with military demands.
This situation reflects a broader struggle over who controls powerful technologies and how far ethical boundaries can extend when national security is involved. The facts show a clear divide between the Pentagon’s operational requirements and Anthropic’s insistence on responsible AI use, with Claude at the center of the dispute.

