
Team teaches AI models to spot misleading scientific reporting

by Simon Osuji
May 29, 2025
in Artificial Intelligence
[Figure] The dataset construction process: 1) utilizing publicly available datasets as well as web resources to collect human-written scientific news related to COVID-19, 2) selecting abstracts from CORD-19 as resources to guide LLMs to generate articles using a jailbreak prompt, 3) augmenting the dataset with an evidence corpus drawn from CORD-19. Credit: https://openreview.net/pdf/17a3c9632a6f71e59171f7a8f245c9dce44cf559.pdf

Artificial intelligence isn’t always a reliable source of information: large language models (LLMs) like Llama and ChatGPT can be prone to “hallucinating” and inventing bogus facts. But what if AI could be used to detect mistaken or distorted claims, and help people find their way more confidently through a sea of potential distortions online and elsewhere?


In work presented at a workshop at the annual conference of the Association for the Advancement of Artificial Intelligence, researchers at Stevens Institute of Technology describe an AI architecture designed to do just that, using open-source LLMs and free versions of commercial LLMs to identify potentially misleading narratives in news reports on scientific discoveries.

“Inaccurate information is a big deal, especially when it comes to scientific content—we hear all the time from doctors who worry about their patients reading things online that aren’t accurate, for instance,” said K.P. Subbalakshmi, the paper’s co-author and a professor in the Department of Electrical and Computer Engineering at Stevens.

“We wanted to automate the process of flagging misleading claims and use AI to give people a better understanding of the underlying facts.”

To achieve that, the team of two Ph.D. students and two Master’s students led by Subbalakshmi first created a dataset of 2,400 news reports on scientific breakthroughs.

The dataset included both human-generated reports, drawn either from reputable science journals or low-quality sources known to publish fake news, and AI-generated reports, of which half were reliable and half contained inaccuracies.

Each report was then paired with original research abstracts related to the technical topic, enabling the team to check each report for scientific accuracy. Their work is the first attempt at systematically directing LLMs to detect inaccuracies in science reporting in public media, according to Subbalakshmi.

“Creating this dataset is an important contribution in its own right, since most existing datasets typically do not include information that can be used to test systems developed to detect inaccuracies ‘in the wild,’” Dr. Subbalakshmi said. “These are difficult topics to investigate, so we hope this will be a useful resource for other researchers.”
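The pairing of each report with its source abstracts, described above, might be represented like this. This is a minimal sketch; the class and all field names (`NewsRecord`, `origin`, `evidence_abstracts`, and so on) are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class NewsRecord:
    """One entry in a science-misinformation dataset (field names are illustrative)."""
    article_text: str    # the news report on a scientific finding
    origin: str          # "human" or "llm" -- who wrote the report
    label: str           # "reliable" or "unreliable" ground truth
    # peer-reviewed abstracts paired with the report, used as evidence
    evidence_abstracts: list[str] = field(default_factory=list)

record = NewsRecord(
    article_text="New study proves coffee cures the common cold.",
    origin="human",
    label="unreliable",
    evidence_abstracts=["Abstract: We observed no causal link between ..."],
)
```

Keeping the evidence abstracts alongside each report is what makes accuracy checking "in the wild" testable: a detector can be scored on whether it grounds its verdict in the paired source text.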

Next, the team created three LLM-based architectures to guide an LLM through the process of determining a news report’s accuracy. One of these architectures is a three-step process. First, the AI model summarized each news report and identified the salient features.

Next, it conducted sentence-level comparisons between claims made in the summary and evidence contained in the original peer-reviewed research. Finally, the LLM made a determination as to whether the report accurately reflected the original research.
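The three-step process above can be sketched as a chain of prompts. This is an assumption-laden outline, not the team's implementation: `llm` stands in for any callable that maps a prompt string to a text response, and the prompt wording is invented for illustration:

```python
def detect_inaccuracy(llm, article: str, abstract: str) -> str:
    """Three-step reliability check (sketch): summarize, compare, decide.

    `llm` is any callable prompt -> response text; prompts are illustrative.
    """
    # Step 1: summarize the news report and pull out its salient claims.
    summary = llm(
        f"Summarize this news report and list its key scientific claims:\n{article}"
    )
    # Step 2: sentence-level comparison of claims against the peer-reviewed evidence.
    comparison = llm(
        "For each claim below, state whether it is supported, contradicted, or "
        f"not addressed by the abstract.\nClaims:\n{summary}\nAbstract:\n{abstract}"
    )
    # Step 3: final determination based on the claim-by-claim comparison.
    return llm(
        "Given this claim-by-claim comparison, answer 'reliable' or 'unreliable':\n"
        f"{comparison}"
    )
```

The key design point is that the model never judges the article in one shot; each stage's output constrains the next, which makes the final verdict traceable to specific claim-evidence mismatches.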

The team also defined five “dimensions of validity”—specific mistakes commonly present in inaccurate news reports, such as oversimplification or confusing causation with correlation—and asked the LLM to evaluate each report along each dimension.

“We found that asking the LLM to use these dimensions of validity made quite a big difference to the overall accuracy,” Dr. Subbalakshmi said, adding that the dimensions can be expanded to better capture domain-specific inaccuracies if needed.
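Folding the dimensions of validity into the prompt might look like the sketch below. Only two of the five dimensions are named in this article, so the list is deliberately incomplete; the prompt wording and function name are assumptions:

```python
# Only these two dimensions are named in the article; the paper defines five,
# and domain-specific dimensions can be appended as needed.
DIMENSIONS = [
    "oversimplification of the findings",
    "confusing causation with correlation",
]

def validity_prompt(article: str, abstract: str) -> str:
    """Build a prompt that asks the model to check each dimension explicitly."""
    checklist = "\n".join(f"- {d}" for d in DIMENSIONS)
    return (
        "Check the news report against the abstract along each dimension "
        f"of validity:\n{checklist}\n"
        f"Report:\n{article}\nAbstract:\n{abstract}\n"
        "Then answer 'reliable' or 'unreliable'."
    )
```

Making the error taxonomy explicit in the prompt, rather than asking for a bare reliability judgment, is what the researchers credit for the accuracy gain.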

Using the new dataset, the team’s LLM pipelines were able to correctly distinguish between reliable and unreliable news reports with about 75% accuracy—but proved markedly better at identifying inaccuracies in human-generated content than in AI-generated reports. The reasons for that aren’t yet clear, although Dr. Subbalakshmi notes that non-expert humans similarly struggle to identify technical errors in AI-generated text.

“There’s certainly room for improvement in our architecture,” Dr. Subbalakshmi says. “The next step might be to create custom AI models for specific research topics, so they can ‘think’ more like human scientists.”

In the long run, the team’s research could open the door to browser plugins that automatically flag inaccurate content as people use the Internet, or to rankings of publishers based on how accurately they cover scientific discoveries.

Perhaps most importantly, Dr. Subbalakshmi says, the research could also enable the creation of LLM models that describe scientific information more accurately, and that are less prone to confabulating when describing scientific research.

“Artificial intelligence is here—we can’t put the genie back in the bottle,” Dr. Subbalakshmi said. “But by studying how AI ‘thinks’ about science, we can start to build more reliable tools—and perhaps help humans to spot unscientific claims more easily, too.”

More information:
Yupeng Cao et al, CoSMis: A Hybrid Human-LLM COVID Related Scientific Misinformation Dataset and LLM pipelines for Detecting Scientific Misinformation in the Wild. https://openreview.net/pdf/17a3c9632a6f71e59171f7a8f245c9dce44cf559.pdf

Provided by
Stevens Institute of Technology

Citation:
Team teaches AI models to spot misleading scientific reporting (2025, May 29)
retrieved 29 May 2025
from https://techxplore.com/news/2025-05-team-ai-scientific.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

