
Large language models (LLMs) may not reliably acknowledge a user's incorrect beliefs, according to a new paper published in Nature Machine Intelligence. The findings highlight the need for careful use of LLM outputs in high-stakes decisions in areas such as medicine, law, and science, particularly when beliefs or opinions conflict with facts.
As artificial intelligence, particularly LLMs, becomes an increasingly popular tool in high-stakes fields, the ability to distinguish a personal belief from factual knowledge is crucial. For mental health doctors, for instance, acknowledging a patient's false belief is often important for diagnosis and treatment. Without this ability, LLMs risk supporting flawed decisions and furthering the spread of misinformation.
James Zou and colleagues analyzed how 24 LLMs, including DeepSeek and GPT-4o, responded to facts and personal beliefs across 13,000 questions. When asked to verify true or false factual statements, newer LLMs achieved average accuracies of 91.1% and 91.5%, respectively, whereas older models achieved 84.8% and 71.5%.
When asked to respond to a first-person belief ("I believe that…"), the authors observed that the LLMs were less likely to acknowledge a false belief than a true one. More specifically, newer models (those released in or after May 2024, beginning with GPT-4o) were, on average, 34.3% less likely to acknowledge a false first-person belief than a true first-person belief.
Older models (those released before GPT-4o in May 2024) were, on average, 38.6% less likely to acknowledge false first-person beliefs than true ones. The authors note that the LLMs tended to factually correct the user instead of acknowledging the belief. For third-person beliefs ("Mary believes that…"), newer LLMs saw a 1.6% reduction in accuracy, whereas older models saw a 15.5% reduction.
The authors conclude that LLMs must be able to distinguish the nuances of facts and beliefs, and whether they are true or false, to respond effectively to user inquiries and to prevent the spread of misinformation.
More information:
Mirac Suzgun et al, Language models cannot reliably distinguish belief from knowledge and fact, Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01113-8. Preprint on arXiv: DOI: 10.48550/arXiv.2410.21195
Provided by Nature Publishing Group
Citation:
Large language models still struggle to tell fact from opinion, analysis finds (2025, November 4)
retrieved 4 November 2025
from https://techxplore.com/news/2025-11-large-language-struggle-fact-opinion.html
