
A method to mitigate hallucinations in large language models

by Simon Osuji
May 22, 2024
in Artificial Intelligence


Abstention rates vs. average test losses on the Temporal Sequences dataset with α = 0.05 (top) and α = 0.05 (bottom) for score functions match count (m.c.), expected match count (e.m.c.), and log-probability (l.p.), and for various calibration methods (. denotes the baseline with no calibration). Box widths and heights represent 90% confidence intervals with Gaussian approximation over abstention rates and average test errors, respectively. The dashed horizontal line represents the target risk bound α. Credit: arXiv (2024). DOI: 10.48550/arxiv.2405.01563

Large language models (LLMs), architectures based on artificial neural networks that can process, generate and manipulate text in various human languages, have recently become increasingly widespread. These models are now used in a wide range of settings to rapidly answer queries, produce content for specific purposes and interpret complex texts.


While recently introduced LLMs can generate highly convincing texts, in some cases difficult to distinguish from writing produced by humans, they have been found to be prone to so-called hallucinations. In this context, hallucinations refer to an LLM generating entirely incoherent, inaccurate or inappropriate responses.

Researchers at DeepMind recently developed a new procedure that could help identify instances in which an LLM should refrain from responding to a query, for instance by replying "I don't know," because it is likely to hallucinate a nonsensical or incorrect answer. The team's proposed approach, outlined in a paper pre-published on arXiv, entails using LLMs to evaluate their own potential responses.

“Building on earlier approaches that use self-consistency as a more reliable measure of model confidence, we propose using the LLM itself to self-evaluate the similarity between each of its sampled responses for a given query,” Yasin Abbasi Yadkori, Ilja Kuzborskij and their colleagues wrote in their paper. “We then further leverage conformal prediction techniques to develop an abstention procedure that benefits from rigorous theoretical guarantees on the hallucination rate (error rate).”
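The self-consistency scoring the quote describes can be sketched in a few lines. This is a hedged illustration only: `are_equivalent` stands in for an LLM call that judges whether two sampled responses answer the query the same way (a hypothetical helper, not the paper's exact prompt), and `match_count_score` is an assumed name for the match-count statistic.

```python
def match_count_score(candidate, samples, are_equivalent):
    """Self-consistency score: how many independently sampled responses
    the model itself judges equivalent to the candidate answer.
    A higher count suggests the model is more confident in the candidate."""
    return sum(1 for s in samples if are_equivalent(candidate, s))
```

In practice each element of `samples` would be a fresh generation for the same query, and the equivalence check would itself be an LLM prompt, as the authors propose.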

Yadkori, Kuzborskij and their colleagues evaluated their proposed method to mitigate LLM hallucinations in a series of experiments, using Temporal Sequences and TriviaQA, two publicly available datasets containing queries and associated responses. They specifically applied their proposed method to Gemini Pro, an LLM developed at Google and released in 2023.

“Experimentally, our resulting conformal abstention method reliably bounds the hallucination rate on various closed-book, open-domain generative question answering datasets, while also maintaining a significantly less conservative abstention rate on a dataset with long responses (Temporal Sequences) compared to baselines using log-probability scores to quantify uncertainty, while achieving comparable performance on a dataset with short answers (TriviaQA),” the researchers wrote.
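The general conformal abstention recipe can be illustrated with a minimal sketch. The function names and the exact finite-sample correction below are assumptions for illustration, not the paper's procedure: given a held-out calibration set of confidence scores and hallucination indicators, choose the lowest score threshold whose conformal risk bound stays below the target α, then abstain on any new query scoring below it.

```python
import numpy as np

def calibrate_abstention_threshold(cal_scores, cal_errors, alpha):
    """Choose the smallest confidence threshold lam such that answering
    only when score >= lam keeps a finite-sample bound on the
    hallucination (error) rate below alpha.
    cal_scores: confidence scores on calibration queries (higher = more confident)
    cal_errors: 1 if the model's answer to that query was a hallucination, else 0"""
    scores = np.asarray(cal_scores, dtype=float)
    errors = np.asarray(cal_errors, dtype=float)
    order = np.argsort(scores)
    scores, errors = scores[order], errors[order]
    for i in range(len(scores)):
        answered = errors[i:]  # queries we would still answer at this threshold
        # Conformal (finite-sample) upper bound on the risk of answering
        risk_bound = (answered.sum() + 1) / (len(answered) + 1)
        if risk_bound <= alpha:
            return scores[i]
    return np.inf  # no threshold meets the bound: abstain everywhere

def decide(score, lam):
    # Answer only when the confidence score clears the calibrated threshold
    return "answer" if score >= lam else "abstain"
```

The key property this sketch mimics is the one the authors prove rigorously: by construction, the empirical risk among answered calibration queries (with a finite-sample correction) never exceeds α.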

“To evaluate the experiments automatically, one needs to determine if two responses are equivalent given a question. Following standard practice, we use a thresholded similarity function to determine if two responses match, but also provide a method for calibrating the threshold based on conformal prediction, with theoretical guarantees on the accuracy of the match prediction, which might be of independent interest.”
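The calibrated match threshold the authors mention can also be sketched. Again the names and the exact conformal index are illustrative assumptions: given similarity scores for calibration pairs a human judged equivalent, pick a threshold so that at most roughly a δ fraction of true matches are missed.

```python
import numpy as np

def calibrate_match_threshold(sims_of_true_matches, delta):
    """Pick a similarity threshold t so that, on held-out pairs known to
    be equivalent, at most ~delta of them score below t.
    sims_of_true_matches: similarity scores of human-labeled matching pairs."""
    s = np.sort(np.asarray(sims_of_true_matches, dtype=float))
    n = len(s)
    # Conformal index: allow floor(delta * (n + 1)) misses among n + 1
    # exchangeable pairs, giving a finite-sample guarantee on fresh pairs
    k = int(np.floor(delta * (n + 1)))
    return s[min(k, n - 1)]

def responses_match(similarity, threshold):
    # Declare two responses equivalent when similarity clears the threshold
    return similarity >= threshold
```

With the threshold calibrated this way, the automatic match predictor used to grade experiments inherits a theoretical guarantee on its accuracy, which is the point the authors flag as being of independent interest.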

The results of the team's experiments suggest that their conformal calibration and similarity scoring procedure does mitigate LLM hallucinations, allowing a model to abstain from answering a question when its answer is likely to be nonsensical or untrustworthy. The newly proposed approach was found to outperform simple baseline scoring procedures.

This recent study by DeepMind could soon inform the development of similar procedures to improve the reliability of LLMs and prevent them from hallucinating. Collectively, these efforts will contribute to the advancement of these models, facilitating their widespread use among professionals worldwide.

More information:
Yasin Abbasi Yadkori et al, Mitigating LLM Hallucinations via Conformal Abstention, arXiv (2024). DOI: 10.48550/arxiv.2405.01563

Journal information:
arXiv

© 2024 Science X Network

Citation:
A method to mitigate hallucinations in large language models (2024, May 22)
retrieved 22 May 2024
from https://techxplore.com/news/2024-05-method-mitigate-hallucinations-large-language.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




