Large language models pose risk to science with false answers, says study

by Simon Osuji
November 21, 2023
in Artificial Intelligence


Two hypothetical use cases for LLMs based on real prompts and responses demonstrate the effect of inaccurate responses on user beliefs. Credit: Nature Human Behaviour (2023). DOI: 10.1038/s41562-023-01744-0

Large Language Models (LLMs) pose a direct threat to science because of so-called “hallucinations” (untruthful responses), and should be restricted to protect scientific truth, says a new paper from leading Artificial Intelligence researchers at the Oxford Internet Institute.

The paper by Professors Brent Mittelstadt, Chris Russell, and Sandra Wachter has been published in Nature Human Behaviour. It explains, “LLMs are designed to produce helpful and convincing responses without any overriding guarantees regarding their accuracy or alignment with fact.”

One reason for this is that the data the technology uses to answer questions does not always come from a factually reliable source. LLMs are trained on large datasets of text, usually taken from online sources, which can contain false statements, opinions, and creative writing, among other kinds of non-factual information.

Professor Mittelstadt explains, “People using LLMs often anthropomorphize the technology, trusting it as a human-like information source. This is, in part, due to the design of LLMs as helpful, human-sounding agents that converse with users and answer seemingly any question with confident-sounding, well-written text. The result is that users can easily be convinced that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.”

To protect science and education from the spread of bad and biased information, the authors argue, clear expectations should be set around what LLMs can responsibly and helpfully contribute. According to the paper, “For tasks where the truth matters, we encourage users to write translation prompts that include vetted, factual information.”

Professor Wachter says, “The way in which LLMs are used matters. In the scientific community, it is vital that we have confidence in factual information, so it is important to use LLMs responsibly. If LLMs are used to generate and disseminate scientific articles, serious harms could result.”

Professor Russell adds, “It’s important to take a step back from the opportunities LLMs offer and consider whether we want to give those opportunities to a technology just because we can.”

LLMs are currently treated as knowledge bases and used to generate information in response to questions. This makes the user vulnerable both to regurgitated false information that was present in the training data and to “hallucinations”—false information spontaneously generated by the LLM that was not present in the training data.

To overcome this, the authors argue, LLMs should instead be used as “zero-shot translators.” Rather than relying on the LLM as a source of relevant information, the user should simply provide the LLM with appropriate information and ask it to transform it into a desired output, for example rewriting bullet points as a conclusion or generating code to transform scientific data into a graph.

Using LLMs in this way makes it easier to check that the output is factually correct and consistent with the provided input.
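
As a concrete illustration, here is a minimal Python sketch of the contrast between the two usage patterns. The call_llm() helper and both prompts are hypothetical, standing in for whichever LLM client and wording a user actually chooses; they are not taken from the paper.

```python
# A minimal sketch (not from the paper) contrasting "knowledge base" use of
# an LLM with the "zero-shot translator" pattern the authors recommend.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a stub so the sketch runs."""
    return "<model output would appear here>"

# Knowledge-base use (discouraged): the model must supply the facts itself,
# so regurgitated errors and hallucinations flow straight into the answer.
risky_prompt = "What are the main risks LLMs pose to science?"

# Zero-shot translation (encouraged): the user supplies vetted information
# and asks only for a change of form, not for new facts.
vetted_notes = """\
- LLMs produce convincing text with no guarantee of factual accuracy.
- Users tend to over-trust confident-sounding, human-like responses.
- Recommendation: use LLMs to transform vetted input, not to answer questions.
"""
translator_prompt = (
    "Rewrite the following vetted bullet points as a single concluding "
    "paragraph. Use only the information given; do not add new claims.\n\n"
    + vetted_notes
)

conclusion = call_llm(translator_prompt)
# Verification is now a containment check: every claim in `conclusion`
# should be traceable to a line of `vetted_notes`.
print(conclusion)
```

Because every claim in the output should be traceable to a line of the vetted input, checking the result becomes a simple containment check rather than open-ended fact-checking, which is the practical benefit the authors highlight.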

The authors acknowledge that the technology will undoubtedly assist with scientific workflows but are clear that scrutiny of its outputs is key to protecting robust science.

“To protect science, we must use LLMs as zero-shot translators,” says lead author Dr. Brent Mittelstadt, Director of Research, Associate Professor and Senior Research Fellow at the Oxford Internet Institute.

More information:
Mittelstadt, B. et al. To protect science, we must use LLMs as zero-shot translators. Nature Human Behaviour (2023). DOI: 10.1038/s41562-023-01744-0. https://www.nature.com/articles/s41562-023-01744-0

Provided by
University of Oxford

Citation:
Large language models pose risk to science with false answers, says study (2023, November 20)
retrieved 21 November 2023
from https://techxplore.com/news/2023-11-large-language-pose-science-false.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.