Artificial intelligence is a major subtheme of the U.S. intelligence community’s annual report on threats—one increasingly described in strategic, not just technical, terms.
In its 2026 Worldwide Threat Assessment, the Office of the Director of National Intelligence calls AI a “defining technology for the 21st century,” notes that it is being used in combat, and identifies China as “the most capable competitor” to the United States. The assessment, released on Wednesday as intelligence leaders testified to lawmakers, offers a rare window into how they interpret the global threat landscape.
The new version of the annual report treats AI far more prominently than the 2024 and 2025 editions did, but in a role that resists easy categorization. Unlike the enduring threats from China, Russia, Iran, North Korea, and terrorist groups, AI is treated less as a discrete actor or capability and more as a cross-cutting force shaping each of them.
The 2024 report, for instance, describes AI as “moving into its industrial age,” noting its potential for economic benefit and disruption, but also the hypothetical development of new “chemical weapons” and materials that could make China’s or Russia’s military more competitive. It also notes that authoritarian regimes might use AI to generate fake content and as a tool for mass surveillance and coercion of their own populations.
“During the next several years, governments are likely to exploit new and more intrusive technologies—including generative AI—for transnational repression,” it says.
That trend is well underway. AI-created misinformation and disinformation have proliferated across global social media, often supported by China, Russia, and other authoritarian regimes, and often at the expense of the U.S. government, military, or other institutions.
The 2025 report took note of Russian deepfakes but didn’t describe the intent or consequences. The authors were more concerned about Moscow’s pioneering use of AI: on the battlefield, particularly in anti-drone efforts. They also highlighted China’s “multifaceted, national-level strategy” to displace the United States as the “most influential AI power by 2030.”
Over the past year, AI has seized a growing share of public attention, private investment, and White House and Defense Department focus. While the Pentagon has used it for intelligence analysis since 2017, the new threat report notes that AI “has been used in recent conflicts to influence targeting and streamline decision-making, marking a significant shift in the nature of modern warfare.”
It reiterates its predecessors’ emphasis on the importance of U.S. dominance in AI technology while also noting that “other global powers’ robust progress in AI is challenging U.S. economic competitiveness and national security advantages.” In particular, it says, “China is driving AI adoption at scale—both domestically and internationally—by using its sizable talent pool, extensive datasets, government funding, and burgeoning global partnerships.”
The report also carries a special warning about autonomy in warfare: autonomous AI systems pose risks that require careful human engineering to mitigate before they are broadly deployed.
At the Wednesday hearing before the Senate Intelligence Committee, Director of National Intelligence Tulsi Gabbard said that a China-run data-extortion operation last August foretold the future: the perpetrators used “an AI tool” to extort “international government, healthcare, public health, emergency services sectors, and religious institutions.”
What’s missing
Missing from today’s hearing and the new report is any meaningful mention of AI’s role in election interference, disinformation, and the advancement of autocracy.
That’s a big change from 2024, when those uses of AI drew much comment at the hearing connected with the annual threat assessment. Brett Michael Holmgren, then-Assistant Secretary of State for Intelligence and Research, said that “tools like generative AI will essentially lower the barrier for actors, state and non-state, with fewer resources to engage in potential election interference.” CIA Director William Burns said that threat actors in the Arabian Peninsula had “used AI to generate videos aimed at inspiring lone-wolf attacks as a result of the Gaza conflict as well.” And Avril Haines, the then-Director of National Intelligence, said, “Russia is deploying AI tools in the context of their influence efforts in Ukraine.”
Over the past two years, the Republican party and the Trump administration have dismantled efforts to prevent the spread of misinformation: pressing social-media companies to end moderation efforts, forcing universities to cease monitoring programs, and shuttering a key office at the Department of State.
But allied governments continue to mark the threat. Kaja Kallas, the European Union’s High Representative for Foreign Affairs and Security Policy and Vice-President of the European Commission, speaking on Tuesday at a conference in Belgium, noted: “AI has taken cognitive warfare to the next level, in the movie business and many other sectors, including our democratic space.”