
Artificial intelligence (AI) is already capable of deciding and acting, but the world dare not allow the human being to be marginalised in military decision-making, according to Dr Robin Blake from the University of Pretoria’s Department of Political Science.
Speaking on day one of the Africa Aerospace and Defence (AAD) 2024 Conference on Future Warfare, organised by defenceWeb, Blake said there were applications where AI could be exceptionally useful. Referring to United States Air Force Colonel John Boyd’s OODA (Observe, Orient, Decide and Act) loop, a model of combat decision-making the fighter pilot developed drawing on his Korean War experience, Blake said AI’s ability to collect data, process it and recognise patterns, potentially predicting what would happen, could speed up the process immensely.
Humans, though, had to be involved in deciding what to do and how to act, because AI could provide neither the intuition nor the emotional context that are so key to understanding the consequences of acts of war.
“Removing humans doesn’t address the contextual understanding of the complexity and unpredictability of the situation,” he said.
In critical situations, human intuition and experience were increasingly important in coming to the right decision, something AI could not yet assist with as it did not have the ability to understand emotion.
As such, decisions taken solely by AI could be subject to algorithmic and inherited bias, leading to the misinterpretation of data and the misidentification of targets. Another risk was data limitations in the field, which could compromise the reliability and timeliness of data and leave the AI model drawing conclusions from insufficient or incorrect inputs.
AI, he said, was, however, well placed to assist by supporting decisions taken by humans, simulating scenarios in which decisions needed to be taken, and producing risk assessments.