
Artificial intelligence (AI) plays a role in virtually every aspect of our lives, from self-driving cars to smart vacuum cleaners, to computer models that can predict the course of an epidemic. No matter how advanced these AI systems are, there always remains a certain degree of unpredictability about their behavior.
Thom Badings has developed a new method to include this uncertainty in predictive algorithms, so that a safe solution can be achieved. His Ph.D. defense takes place on 27 March at Radboud University.
When an AI model works well, everything seems to run effortlessly: the car reaches its destination, the drone flies without crashing, and the economic forecasts turn out to be completely correct. But in practice, systems controlled by AI run into numerous uncertainties. The drone must take birds and wind into account, and the self-driving car must be able to avoid roadworks and people suddenly crossing the road. So, how do you ensure that everything continues to run “effortlessly?”
Markov models
“That is why my colleagues and I developed methods to guarantee the accuracy and reliability of complex systems with high degrees of uncertainty,” explains Badings. “Many existing methods have difficulty dealing with this uncertainty. They require a lot of calculations, or they rely on restrictive assumptions, which means that the uncertainty is not properly taken into account. Our method creates a mathematical model of that uncertainty, for example, based on historical data, so that an accurate prediction can be made much faster.”
Badings’ method is based on modeling systems in the form of Markov models, an existing category of models often used in control engineering, AI and decision theory. “In a Markov model, we can explicitly include uncertainty in specific parameters, for example, for the wind speed or the weight of a drone. We then plug the model of the uncertainty, such as a probability distribution over these parameters, into the Markov model.
“Using techniques from control engineering and computer science, we can then prove whether this model behaves safely, despite the uncertainty in the model. This way, you can get an exact answer to the question of what the probability is that your drone will collide with an obstacle, without having to simulate each scenario separately.”
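The idea of computing a collision probability exactly, rather than by simulating trajectories, can be sketched in miniature. The example below is a hypothetical illustration, not the thesis's actual method: a tiny drone Markov chain with one uncertain parameter (a wind-gust probability), where the probability of eventually crashing is obtained by solving the linear reachability equations x = Qx + b over the transient states. Sampling the parameter from an assumed range then yields a range of crash probabilities, mirroring how a distribution over parameters is plugged into the model. All states, numbers, and the wind model here are invented for illustration.

```python
import numpy as np

def crash_probability(p_wind):
    """Exact probability of eventually reaching the absorbing 'crashed'
    state from 'cruising', for a given wind-gust parameter p_wind,
    by solving the linear reachability system x = Q x + b."""
    # Transient states: 0 = cruising, 1 = near an obstacle.
    # Q[i, j]: probability of moving between transient states i -> j.
    Q = np.array([[0.70, p_wind],   # cruising: keep cruising, or get blown near an obstacle
                  [0.50, 0.20]])    # near obstacle: recover, or stay close
    # b[i]: probability of crashing directly from transient state i.
    # (Remaining probability mass in each row goes to the 'goal' state.)
    b = np.array([0.00, 0.25])
    # Reachability: x = Q x + b  <=>  (I - Q) x = b
    x = np.linalg.solve(np.eye(2) - Q, b)
    return float(x[0])  # crash probability starting from 'cruising'

# Uncertainty over the wind parameter: sample from an assumed range
# instead of picking one fixed value.
rng = np.random.default_rng(0)
samples = rng.uniform(0.02, 0.15, size=1000)  # hypothetical wind model
probs = [crash_probability(p) for p in samples]
print(f"crash probability lies in [{min(probs):.3f}, {max(probs):.3f}]")
```

Each call solves a 2x2 linear system rather than running Monte Carlo rollouts, which is the point of the quote above: given a model of the uncertainty, the safety probability is computed, not estimated.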
Embrace the uncertainty
“The ultimate goal is not to eliminate uncertainty, but to embrace it. You know that everything you do involves uncertainty, but by modeling it in this way, you make it part of your analysis. The results you get, therefore, robustly take that uncertainty into account in a way that is much more complete than with existing methods.”
Badings does warn about the limits of this approach: “If you have a situation with many parameters, it remains costly to include all the uncertainty. You can never completely eliminate that uncertainty, so you will still have to make certain assumptions to get meaningful results. Don’t assume that you can use one model to have your drone traverse every area in the world, but initially limit your model to the most likely environments.”
According to Badings, it is important to use techniques from different research areas when analyzing systems with AI. “Don’t get too hung up on the results you get from an AI model like ChatGPT, but use insights from control engineering, computer science, and artificial intelligence to arrive at a robust and safe solution.”
Radboud University
Citation:
This AI model is more certain about uncertainty (2025, March 20)
retrieved 20 March 2025
from https://techxplore.com/news/2025-03-ai-uncertainty.html