Anthropic’s Claude AI Emerges as Key Technology in U.S. Iran Campaign Amid Dispute With Pentagon
Advanced artificial intelligence system reportedly aided military targeting and analysis during strikes on Iran as tensions escalate between the U.S. government and its developer
Anthropic’s artificial intelligence system Claude has played a significant role in the United States’ recent military campaign against Iran, even as a fierce dispute between the technology company and the federal government unfolds over how the tool should be used in national security operations.
U.S. military officials have integrated Claude into operational systems used to analyze intelligence, identify potential targets and simulate battlefield scenarios during the escalating confrontation with Iran.
Embedded in classified defense platforms, the system has helped commanders process vast amounts of data far faster than traditional intelligence workflows allow.
During the opening phase of the campaign, the system reportedly helped military planners rapidly evaluate potential targets and prioritize operations as U.S. and Israeli forces carried out a large number of strikes across Iran.
Claude’s ability to analyze surveillance data, map potential threats and recommend courses of action has underscored the growing importance of artificial intelligence in modern military strategy.
Claude has been incorporated into advanced defense software through partnerships between Anthropic and major technology and data-analysis firms, enabling it to support decision-making across multiple parts of the military planning cycle.
The system has been used not only to assist with targeting analysis but also to evaluate intelligence reports, test operational scenarios and help commanders coordinate complex operations.
The system’s central role in the campaign coincides with an increasingly public disagreement between Anthropic and the U.S. Department of Defense over the conditions under which the company’s tools may be deployed.
Anthropic leaders have argued that safeguards are needed to prevent artificial intelligence from being used for mass surveillance or fully autonomous weapons that could act without human oversight.
Defense officials, however, have insisted that the government must retain broad authority to employ advanced technologies for national security purposes.
The disagreement intensified after negotiations over military access to Anthropic’s systems collapsed, prompting federal officials to begin phasing the company’s technology out of government systems while exploring alternatives from other American developers.
Despite the policy shift, the military has continued relying on Claude in ongoing operations because the system is deeply embedded in classified networks and cannot be replaced immediately.
Officials say transitioning to alternative artificial intelligence platforms will require months of technical work to ensure compatibility with existing systems and security standards.
The episode illustrates how rapidly advancing artificial intelligence has become intertwined with geopolitical conflict.
Military planners increasingly rely on AI tools capable of scanning massive data streams, identifying threats and compressing the time between detection and action.
As governments race to harness the technology, the dispute between Anthropic and U.S. defense leaders highlights a broader global debate about who should control powerful AI systems and how they should be used in matters of war and national security.