
How sure is sure? Incorporating human error into machine learning

By Simon Osuji
August 10, 2023
in Artificial Intelligence


Image credit: Pixabay/CC0 Public Domain

Researchers are developing a way to incorporate one of the most human of characteristics—uncertainty—into machine learning systems.


Human error and uncertainty are concepts that many artificial intelligence systems fail to grasp, particularly in systems where a human provides feedback to a machine learning model. Many of these systems are programmed to assume that humans are always certain and correct, but real-world decision-making includes occasional mistakes and uncertainty.

Researchers from the University of Cambridge, along with The Alan Turing Institute, Princeton, and Google DeepMind, have been attempting to bridge the gap between human behavior and machine learning, so that uncertainty can be more fully accounted for in AI applications where humans and machines are working together. This could help reduce risk and improve trust and reliability of these applications, especially where safety is critical, such as medical diagnosis.

The team adapted a well-known image classification dataset so that humans could provide feedback and indicate their level of uncertainty when labeling a particular image. The researchers found that training with uncertain labels can improve these systems’ performance in handling uncertain feedback, although incorporating human feedback also lowers the overall performance of these hybrid systems.
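As an illustration (not the paper’s actual training setup), an uncertainty-aware “soft” label can enter a standard cross-entropy loss in place of a hard one-hot label. The bird-color classes and probabilities below are invented for the sketch:

```python
import numpy as np

def soft_cross_entropy(pred_probs, label_probs):
    """Cross-entropy of the model's predicted distribution
    against a (possibly soft) human-provided label distribution."""
    eps = 1e-12  # avoid log(0)
    return float(-np.sum(label_probs * np.log(pred_probs + eps)))

# Model predicts 70% "red", 30% "orange" for a bird image.
pred = np.array([0.7, 0.3])

# A fully certain annotator supplies a hard one-hot label...
loss_hard = soft_cross_entropy(pred, np.array([1.0, 0.0]))  # ≈ 0.357

# ...while an unsure annotator spreads probability mass: "probably red".
loss_soft = soft_cross_entropy(pred, np.array([0.8, 0.2]))  # ≈ 0.526
```

Because the soft label keeps some mass on the alternative class, the loss no longer pushes the model toward full confidence in a single answer, which is the basic mechanism by which uncertain human feedback can be absorbed during training.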

Their results will be reported at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES 2023) in Montréal.

‘Human-in-the-loop’ machine learning systems—a type of AI system that enables human feedback—are often framed as a promising way to reduce risks in settings where automated models cannot be relied upon to make decisions alone. But what if the humans are unsure?

“Uncertainty is central in how humans reason about the world but many AI models fail to take this into account,” said first author Katherine Collins from Cambridge’s Department of Engineering. “A lot of developers are working to address model uncertainty, but less work has been done on addressing uncertainty from the person’s point of view.”

We are constantly making decisions based on the balance of probabilities, often without really thinking about it. Most of the time—for example, if we wave at someone who looks just like a friend but turns out to be a total stranger—there’s no harm if we get things wrong. However, in certain applications, uncertainty comes with real safety risks.

“Many human-AI systems assume that humans are always certain of their decisions, which isn’t how humans work—we all make mistakes,” said Collins. “We wanted to look at what happens when people express uncertainty, which is especially important in safety-critical settings, like a clinician working with a medical AI system.”

“We need better tools to recalibrate these models, so that the people working with them are empowered to say when they’re uncertain,” said co-author Matthew Barker, who recently completed his MEng degree at Gonville and Caius College, Cambridge. “Although machines can be trained with complete confidence, humans often can’t provide this, and machine learning models struggle with that uncertainty.”

For their study, the researchers used some of the benchmark machine learning datasets: one was for digit classification, another for classifying chest X-rays, and one for classifying images of birds. For the first two datasets, the researchers simulated uncertainty, but for the bird dataset, they had human participants indicate how certain they were of the images they were looking at: whether a bird was red or orange, for example.
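One simple way to simulate annotator uncertainty (the paper does not specify its exact scheme; this is an assumption, essentially label smoothing) is to have a hypothetical annotator place a fixed confidence on the true class and spread the remainder uniformly over the others. The function name and numbers here are illustrative:

```python
def simulate_uncertain_label(true_class, n_classes, confidence):
    """Turn a hard label into a simulated soft label: the imagined
    annotator puts `confidence` mass on the true class and spreads
    the remainder uniformly over the other classes."""
    other = (1.0 - confidence) / (n_classes - 1)
    return [confidence if c == true_class else other
            for c in range(n_classes)]

# A digit-classification label ("3" out of 10 classes) at 90% confidence.
label = simulate_uncertain_label(true_class=3, n_classes=10, confidence=0.9)
```

Varying `confidence` per example then lets researchers probe how a model trained on such labels behaves as annotators become more or less sure.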

These annotated ‘soft labels’ provided by the human participants allowed the researchers to determine how the final output was changed. However, they found that performance degraded rapidly when machines were replaced with humans.

“We know from decades of behavioral research that humans are almost never 100% certain, but it’s a challenge to incorporate this into machine learning,” said Barker. “We’re trying to bridge the two fields, so that machine learning can start to deal with human uncertainty where humans are part of the system.”

The researchers say their results have identified several open challenges when incorporating humans into machine learning models. They are releasing their datasets so that further research can be carried out and uncertainty might be built into machine learning systems.

“As some of our colleagues so brilliantly put it, uncertainty is a form of transparency, and that’s hugely important,” said Collins. “We need to figure out when we can trust a model and when to trust a human and why. In certain applications, we’re looking at a probability over possibilities. Especially with the rise of chatbots for example, we need models that better incorporate the language of possibility, which may lead to a more natural, safe experience.”

“In some ways, this work raised more questions than it answered,” said Barker. “But even though humans may be mis-calibrated in their uncertainty, we can improve the trustworthiness and reliability of these human-in-the-loop systems by accounting for human behavior.”

Provided by
University of Cambridge

Citation:
How sure is sure? Incorporating human error into machine learning (2023, August 9)
retrieved 9 August 2023
from https://techxplore.com/news/2023-08-incorporating-human-error-machine.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






