Automated fake news detection: A simple solution may not be feasible

By Simon Osuji
March 14, 2024, in Artificial Intelligence


Credit: Unsplash/CC0 Public Domain

With misinformation and disinformation proliferating online, many may wish for a simple, reliable, automated “fake news” detection system to easily identify falsehoods from truths. Often with the help of machine learning, many scientists have developed such tools, but experts advise caution when deploying them.
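
The article doesn't reproduce any particular detector, but a minimal sketch of the kind of supervised text classifier these tools are typically built on, in Python with scikit-learn, might look as follows. The tiny inline dataset, its labels, and the probability cutoff are invented for illustration, not taken from the study.

# A baseline "fake news" classifier: TF-IDF features feeding a linear model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training articles with human-assigned labels (1 = reliable, 0 = unreliable).
texts = [
    "Officials confirm the bridge will reopen after a routine inspection.",
    "Miracle cure doctors don't want you to know about!",
    "Quarterly report shows modest growth in regional exports.",
    "Insiders say a secret elite controls all world banks.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model emits a probability, not a verdict; where to set the cutoff
# is a policy choice, which is exactly where deployment context matters.
print(model.predict_proba(["Shocking trick cures everything overnight!"])[:, 1])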


In new research, Rensselaer Polytechnic Institute’s Dorit Nevo, Ph.D., professor in the Lally School of Management, and colleagues explored the mistakes that these detection tools make. They found that bias and limited generalizability arise from how the models are trained and designed, and from the unpredictability of news content. These challenges, in turn, give rise to ethical concerns.

Nevo was joined in the research by Benjamin D. Horne, Ph.D., assistant professor in Data Science and Engineering at the School of Information Sciences at the University of Tennessee, and Susan L. Smith, Ph.D., senior lecturer in Cognitive Science at Rensselaer.

The work is published in the journal Behaviour & Information Technology.

“Models are ranked on performance metrics and only research on the best performing model is published,” say the authors. “This format sacrifices empirical rigor and does not take into account the deployment context.” For example, a model may deem one source as reliable, or true, when the source may in fact publish a mix of true and false news, depending on the topic.
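
To see why source-level labels mislead, consider a toy calculation that treats every article from an outlet as sharing the outlet's overall rating, the "distant supervision" shortcut many detection datasets lean on. The outlet name and per-article truth values below are invented for illustration.

# A "mixed" outlet rated reliable overall still publishes false items on some topics.
articles = [
    {"source": "MixedOutlet", "topic": "science", "actually_true": True},
    {"source": "MixedOutlet", "topic": "politics", "actually_true": False},
    {"source": "MixedOutlet", "topic": "politics", "actually_true": False},
]

# Distant supervision: one label per source, copied onto all of its articles.
source_label = {"MixedOutlet": True}

mislabeled = sum(a["actually_true"] != source_label[a["source"]] for a in articles)
print(f"{mislabeled} of {len(articles)} articles receive the wrong ground truth")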

On top of that, a set of labels referred to as ground truth is used to train and evaluate the models, and the people generating the labels may be uncertain themselves whether a news item is real or fake.

Together, these elements may perpetuate biases.
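
One way to make that labeler uncertainty visible is to measure how often independent annotators agree. As a sketch (the two annotators' labels below are invented), Cohen's kappa corrects raw agreement for chance, and a value well short of 1 suggests the "ground truth" itself is contested:

from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators over eight articles (1 = real, 0 = fake).
annotator_a = [1, 0, 1, 1, 0, 1, 0, 0]
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0]

# Kappa near 1.0 means solid agreement; here it comes out noticeably lower.
print(cohen_kappa_score(annotator_a, annotator_b))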

“One consumer may view content as biased that another may think is true,” said Nevo. “Similarly, one model may flag content as unreliable, and another will not. A developer may consider one model the best, but another developer may disagree. We think a clear understanding of these issues must be attained before a model may be considered trustworthy.”

The research team analyzed 140,000 news articles from one month in 2021 and examined the issues that arise from automated content moderation. They came to three main conclusions. First, who chooses the ground truth matters. Second, operationalizing tasks for automation may perpetuate bias. Third, ignoring or simplifying the application context reduces research validity.

“It is critical to employ diverse developers when determining ground truth,” said Horne. “Not only should we employ programmers and data analysts in the task, but also experts in other fields as well as members of the general public.”

Smith adds, “Models have far-reaching societal, economic, and ethical implications that cannot be understood by a single field alone.”

Further, the model must be continually reevaluated. Over time, models may fail to perform as predicted and the ground truth may become uncertain. As anomalies increase, experts must explore new approaches for establishing ground truth. Similarly, the methods for establishing ground truth will evolve as science advances, and so must our models.
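
The continual reevaluation described above can be made concrete with a simple monitoring loop: periodically score the deployed model on a freshly labeled audit sample and raise a flag when performance decays. The monthly accuracy figures, window size, and alert threshold below are invented for illustration.

# Hypothetical monthly audit accuracies for a deployed detector.
monthly_accuracy = [0.91, 0.90, 0.88, 0.84, 0.79, 0.74]

ROLLING_WINDOW = 3
ALERT_THRESHOLD = 0.80

for month in range(ROLLING_WINDOW, len(monthly_accuracy) + 1):
    window = monthly_accuracy[month - ROLLING_WINDOW:month]
    rolling = sum(window) / ROLLING_WINDOW
    if rolling < ALERT_THRESHOLD:
        # Time to revisit the ground truth and retrain, or retire the model.
        print(f"Month {month}: rolling accuracy {rolling:.2f} is below threshold")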

Finally, we must understand the severe implications that inaccurate fake news detection would have and accept that a single model may never offer a one-size-fits-all solution. Perhaps media literacy combined with a model’s suggestions would offer the most reliability, or a model could be restricted to a single news topic rather than applied to everything.

“By combining weak, limited solutions, we may be able to create strong, robust, fair, and safe solutions,” the researchers conclude.
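
A rough sketch of what "combining weak, limited solutions" could mean in practice: let each narrow model vote only on the topic it was trained for, and defer everything else, including low-confidence calls, to a human reader. The topic names, scores, and deferral thresholds here are assumptions for illustration, not the authors' design.

def combined_verdict(article_topic, topic_models):
    """Route an article to a topic-specific model, deferring when none fits."""
    scorer = topic_models.get(article_topic)
    if scorer is None:
        return "defer to human judgment / media literacy"  # no competent model
    score = scorer()  # hypothetical P(reliable)
    if 0.4 <= score <= 0.6:
        return "defer to human judgment / media literacy"  # model is unsure
    return "likely reliable" if score > 0.6 else "likely unreliable"

# Hypothetical per-topic scorers.
topic_models = {"health": lambda: 0.92, "politics": lambda: 0.55}

print(combined_verdict("health", topic_models))    # confident model answers
print(combined_verdict("politics", topic_models))  # uncertain -> defer
print(combined_verdict("sports", topic_models))    # no model -> defer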

“At this point in history, with the rampant spread of misinformation and the polarization of society, stakes could not be higher for developing accurate tools to detect fake news. Clearly, we must proceed with caution, inclusiveness, thoughtfulness, and transparency,” said Chanaka Edirisinghe, Ph.D., acting dean of Rensselaer’s Lally School of Management.

More information:
Benjamin D. Horne et al, Ethical and safety considerations in automated fake news detection, Behaviour & Information Technology (2023). DOI: 10.1080/0144929X.2023.2285949

Provided by
Rensselaer Polytechnic Institute

Citation:
Automated fake news detection: A simple solution may not be feasible (2024, March 14), retrieved 14 March 2024 from https://techxplore.com/news/2024-03-automated-fake-news-simple-solution.html

