
Artificial intelligence designed to influence our decisions is everywhere: in Google searches, online shopping suggestions and movie streaming recommendations. But how does it affect decision-making in moments of crisis?
Virginia Commonwealth University researcher Christopher Whyte, Ph.D., investigated how emergency management and national security professionals responded during simulated AI attacks.
The results, published in the Journal of Homeland Security and Emergency Management, reveal that the professionals were more hesitant and more doubtful of their abilities when facing fully AI-driven threats than when confronting human hackers acting alone or with AI assistance.
“These results show that AI plays a major role in driving participants to become more hesitant, more cautious,” he said, “except under fairly narrow circumstances.”
Those narrow circumstances are most concerning to Whyte, an associate professor in VCU’s L. Douglas Wilder School of Government and Public Affairs.
National security organizations design their training programs to cut down on hesitancy in moments of uncertainty. While most of the almost 700 American and European professionals in the study thought AI could boost human abilities, a small group believed AI could eventually replace their profession entirely, along with human expertise in general. That group responded recklessly to the AI-based threat, accepting risks and rashly forging ahead.
“These are people that believe the totality of what they do—their professional mission and the institutional mission that they support—could be overtaken by AI,” Whyte said.
Artificial intelligence: The next ‘Great Machine’
Whyte has a theory for why that may be the case.
The discredited “Great Man” theory proposes that the course of history has mainly been shaped by strong political figures, while modern historians give more credit to popular movements. Whyte now proposes that history has also been shaped by transformative technological inventions, like the telegraph or radio, and by misplaced faith in their power—what he has coined the “Great Machine” theory.
But unlike the “Great Man” theory, Whyte said, “Great Machines” are a shared, societal force that can be harnessed for society’s benefit—or for its detriment.
“In the mid-1930s, for instance, we knew that radio waves had a great amount of potential for a lot of things,” Whyte said. “But one of the early ideas was for death rays—you could fry your brain, and so on.”
Death rays caught on, inspiring both science fiction stories and real-life attempts to build them during World War I and the interwar period. It wasn’t until a few years before World War II that scientists began to build something practical with radio waves: radar.
Society currently faces the same problem with AI, Whyte said, describing it as a “general purpose” technology that could either help or hurt society. The technology has already dramatically changed how some people think about the world and their place in it.
“It does so many different things that you really do have this emergent area of replacement mentalities,” he said. “As in, the world of tomorrow will look completely different, and my place in it simply won’t exist because [AI] will fundamentally change everything.”
That line of thinking could pose problems for national security professionals as the new technology upends how they think about their own abilities and changes how they respond to emergency situations.
“That is the kind of psychological condition where we unfortunately end up having to throw out the rulebook on what we know is going to combat bias or uncertainty,” Whyte said.
Combating ‘Skynet’-level threats
To study how AI affects professionals’ decision-making abilities, Whyte recruited almost 700 emergency management and homeland security professionals from the United States, Germany, the United Kingdom and Slovenia to participate in a simulation game.
During the experiment, the professionals were faced with a typical national security threat: a foreign government interfering in an election in their country. They were then assigned to one of three scenarios: a control scenario, where the threat involved human hackers only; a scenario with light, “tactical” AI involvement, where hackers were assisted by AI; and a scenario with heavy AI involvement, where participants were told the threat was orchestrated by a “strategic” AI program.
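To make the three-condition design concrete, here is a minimal sketch in Python of how such an assignment might be organized. The condition names, descriptions and assignment logic below are illustrative assumptions for this sketch, not details taken from the study.

```python
import random
from collections import Counter

# Hypothetical labels for the study's three scenarios (illustrative only).
# Every participant sees the same election-interference threat; only the
# described attacker varies across conditions.
CONDITIONS = {
    "control": "threat attributed to human hackers only",
    "tactical_ai": "human hackers assisted by AI tools",
    "strategic_ai": "threat orchestrated by a 'strategic' AI program",
}

def assign_condition(participant_id: int, seed: int = 42) -> str:
    """Reproducibly assign one participant to one of the three scenarios."""
    rng = random.Random(seed + participant_id)  # per-participant RNG
    return rng.choice(list(CONDITIONS))

if __name__ == "__main__":
    # Example: assign roughly 700 participants and report the group sizes.
    groups = Counter(assign_condition(i) for i in range(700))
    print(groups)
```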
When confronted with a strategic AI-based threat—what Whyte calls a “Skynet”-level threat, referencing the “Terminator” movie franchise—the professionals tended to doubt their training and were hesitant to act. They were also more likely to ask for additional intelligence information compared with their colleagues in the other two groups, who generally responded to the situation according to their training.
In contrast, the participants who thought about AI as a “Great Machine” that could completely replace them acted without restraint and made decisions that contradicted their training.
And while experience and education helped moderate how the professionals responded to the AI-assisted attacks, they didn’t affect how the professionals reacted to the “Skynet”-level threat. That could become a serious problem as AI attacks grow more common, Whyte said.
“People have variable views on whether AI is about augmentation, or whether it really is something that’s going to replace them,” Whyte said. “And that meaningfully changes how people will react in a crisis.”
More information:
Christopher Whyte, “Artificial Intelligence and the ‘Great Machine’ Problem: Avoiding Technology Oversimplification in Homeland Security and Emergency Management,” Journal of Homeland Security and Emergency Management (2025). DOI: 10.1515/jhsem-2024-0030