U.S. Intelligence Agencies Push to Shape AI Oversight in Growing Rift With Commerce Department
Internal U.S. government tensions are emerging over who should control AI safety and security testing — intelligence agencies or the Commerce Department — as Washington moves toward tighter pre-deployment scrutiny of frontier models.
A structural power struggle inside the U.S. government over artificial intelligence oversight is intensifying, with intelligence agencies seeking greater authority over AI model evaluation while the Commerce Department tries to retain its traditional role in technical standards and industry coordination.
At the center of the dispute is how the United States should regulate and assess frontier AI systems before they are released to the public.
What is confirmed is that the Commerce Department, through the National Institute of Standards and Technology (NIST) and its Center for AI Standards and Innovation, has been building a framework for voluntary pre-deployment testing of advanced AI models developed by major technology companies.
These evaluations focus on cybersecurity, biosecurity, and other national security risks, and have involved collaborations with leading firms such as Microsoft, Google, and xAI.
Intelligence agencies, led by elements of the Office of the Director of National Intelligence and related national security bodies, are now pushing for a stronger role in that process.
Their argument is grounded in the view that frontier AI systems increasingly intersect with classified threat analysis, cyber operations, and adversarial intelligence, making them too sensitive to remain primarily under a civilian commerce and standards framework.
The Commerce Department’s approach has historically emphasized voluntary industry participation, technical benchmarking, and the development of shared safety standards.
That model was reinforced when earlier government initiatives established a formal AI safety evaluation body within NIST, which the current administration later rebranded and expanded into the broader Center for AI Standards and Innovation.
The intelligence community’s push introduces a different logic: treating advanced AI models not as commercial technologies requiring safety certification, but as dual-use national security systems requiring classified oversight, restricted testing environments, and potentially mandatory pre-release review.
This divergence reflects a broader shift in Washington’s AI policy landscape.
Recent government-industry agreements have already given federal evaluators early access to major AI models before public release, allowing them to test systems for vulnerabilities that could enable cyberattacks or other malicious uses.
At the same time, the political direction of AI policy has become less uniform, with competing priorities between accelerating innovation, limiting regulatory burdens, and strengthening national security controls.
The stakes are significant: whoever controls the evaluation pipeline effectively determines how quickly AI systems reach the market and how strictly they are constrained before deployment.
If intelligence agencies gain greater authority, AI companies could face tighter pre-release scrutiny and more sensitive security-driven restrictions.
If Commerce retains control, the system is likely to remain more industry-led and standards-based, with voluntary compliance at its core.
The dispute also reflects deeper institutional tension over expertise and jurisdiction.
Commerce agencies argue that technical standards bodies are better positioned to evaluate model behavior and coordinate with industry.
Intelligence officials counter that only classified national security institutions can fully assess the risks posed by advanced systems that may be used in cyber warfare, espionage, or strategic influence operations.
As the debate continues inside the executive branch, the United States is effectively weighing two competing visions of AI governance: one centered on open technical evaluation and industry collaboration, the other on national security containment and intelligence-driven oversight.
The eventual resolution will determine whether AI regulation in the United States becomes a standards-based industrial framework or a security-classified control regime embedded within the intelligence community.