Fairness tool catches AI bias early

by Simon Osuji
August 21, 2025
in Artificial Intelligence


Credit: Unsplash/CC0 Public Domain

Machine learning software helps agencies make important decisions, such as who gets a bank loan or which areas police should patrol. But if these systems have biases, even small ones, they can cause real harm. A specific group of people could be underrepresented in a training dataset, for example, and as the machine learning (ML) model learns, that bias can multiply and lead to unfair outcomes, such as loan denials or higher risk scores in prescription management systems.


Researchers at Carnegie Mellon University’s School of Computer Science (SCS) created FairSense to help developers address unfairness in ML systems before the harm occurs. Currently, most fairness checks examine a system at a specific point in time, but ML models learn, adapt and change. FairSense simulates these systems in their environments over long periods of time to measure unfairness.

“The key is to think about feedback loops,” said Christian Kästner, an associate professor in the Software and Societal Systems Department (S3D). “You might have a tiny bias in the model, like a small discrimination against a gender or race. When it’s deployed, the model produces an effect in the real world. It discriminates against people—they get fewer opportunities, less money or end up in jail more often. And then you train the system on the data influenced by that model, which might amplify the bias over time.

“So it might be small in the beginning, but because it has an effect in the real world and then the model learns from that again, it could become a vicious cycle where the bias grows.”
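The dynamic Kästner describes can be made concrete with a small simulation. The following is a minimal, hypothetical sketch, not FairSense’s code: a decision threshold that is slightly biased against one group shapes real-world outcomes, the next round of “training” reacts to those outcomes, and the gap between groups widens.

```python
# Minimal, hypothetical sketch of the feedback loop described above (not FairSense's code).
# A slightly biased approval threshold shapes real-world outcomes, the next round of
# "training" reacts to those outcomes, and the gap between two groups can widen over time.
import random

random.seed(0)

def simulate(rounds=10, initial_bias=0.02, n=1000):
    # Both groups draw applicant scores from the same distribution; only the
    # learned threshold differs, by a tiny initial amount for group B.
    threshold = {"A": 0.50, "B": 0.50 + initial_bias}
    gaps = []
    for _ in range(rounds):
        rate = {}
        for group in ("A", "B"):
            approved = sum(1 for _ in range(n) if random.random() >= threshold[group])
            rate[group] = approved / n
        gaps.append(rate["A"] - rate["B"])
        # "Retraining": a lower approval rate for group B produces less favorable
        # data about that group, nudging the learned threshold further against it.
        threshold["B"] += 0.5 * (rate["A"] - rate["B"])
    return gaps

print(simulate())  # the approval-rate gap tends to widen round after round
```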

In “FairSense: Long-Term Fairness Analysis of ML-Enabled Systems,” SCS researchers explored how fairness changes as these ML systems are used over time. They focused on testing these systems in a dynamic environment rather than a static state.

To use FairSense, developers provide information about the machine learning system, a model of the environment it will be used in and the metric that indicates fairness. For example, in a bank, the system could be software that predicts applicants’ creditworthiness and makes loan decisions. The environment model includes relevant information from the applicant’s credit history and how credit scores might be affected, and the fairness metric could be the parity between different groups of people approved for loans.
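As a concrete illustration of such a metric, here is a short sketch of demographic parity measured as the gap in approval rates between groups. The definition and names are assumptions made for illustration; in practice, FairSense works with whatever fairness metric the developer specifies.

```python
# Sketch of a demographic-parity style fairness metric for loan decisions,
# assuming the metric is the approval-rate gap between groups (an assumption
# for illustration, not necessarily the exact metric used by FairSense).
from collections import defaultdict

def approval_rate_gap(decisions):
    """decisions: list of (group, approved) pairs, where approved is True/False."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values())  # 0.0 means perfect parity

# Example: group B is approved less often than group A.
sample = [("A", True)] * 70 + [("A", False)] * 30 + [("B", True)] * 55 + [("B", False)] * 45
print(approval_rate_gap(sample))  # 0.15
```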

Along with Kästner, the team included S3D’s Yining She, a doctoral student, and Eunsuk Kang, an associate professor. Sumon Biswas from Case Western Reserve University also participated in the research, which the team presented earlier this year at the International Conference on Software Engineering.

“We simulate how the fairness might change over a long period of time after the system is deployed,” She said. “If we observe an increase in unfairness over time, the next step is identifying the core factors affecting this fairness so the developer can address these issues proactively.”

Since ML-enabled systems are deployed in varied and complex situations that aren’t always predictable, FairSense can capture and simulate that uncertainty in the environment model. In lending, for example, credit score updates and the arrival of new applicants are outside the system’s control and could affect how it behaves over time. FairSense’s simulation generates a wide range of possible scenarios based on these variables, allowing developers to identify the factors, such as credit score thresholds or other parameters, that could have the most significant impact on long-term fairness.
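That scenario sweep can be pictured with a toy example: sample settings of the uncontrollable environment variables, run a long-term simulation under each, and compare how strongly each parameter drives the final unfairness. The dynamics and parameter names below are hypothetical stand-ins, not FairSense’s actual model.

```python
# Hypothetical scenario sweep: vary uncontrollable environment parameters, simulate
# long-term unfairness under each combination, and see which parameter moves it most.
# The dynamics and names are illustrative stand-ins, not FairSense's implementation.
import itertools

def long_term_gap(initial_bias, applicant_drift, rounds=20):
    # Toy dynamics: feedback amplifies the current gap a little each round and
    # drift in the applicant pool adds to it; the gap is clamped to [0, 1].
    gap = initial_bias
    for _ in range(rounds):
        gap = min(1.0, max(0.0, 1.1 * gap + applicant_drift))
    return gap

scenarios = itertools.product([0.0, 0.01, 0.02],      # initial decision bias
                              [-0.001, 0.0, 0.001])   # drift in the applicant pool
results = [((bias, drift), long_term_gap(bias, drift)) for bias, drift in scenarios]

for (bias, drift), gap in sorted(results, key=lambda r: r[1], reverse=True):
    print(f"bias={bias:.2f} drift={drift:+.3f} -> long-term gap={gap:.3f}")
# In this toy model, the initial bias is the stronger driver of the long-term gap,
# which is the kind of factor a developer would then address proactively.
```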

“A lot of the software we build can negatively affect people,” Kang said. “The systems we build have societal impact. The people who build these systems should be thinking about the issues that may arise over time, not just right now.

“When you build and deploy the system, what potential bad things could happen down the road? I hope that reading papers like this one will encourage software developers to think more broadly about potential harms caused by the systems they create and proactively address these types of issues before they are deployed into the real world.”

The researchers plan to expand this work so FairSense can continuously monitor the fairness of deployed ML systems, and to develop a tool that explains how these systems become unfair.

More information:
Yining She et al, FairSense: Long-Term Fairness Analysis of ML-Enabled Systems, 2025 IEEE/ACM 47th International Conference on Software Engineering (ICSE) (2025). DOI: 10.1109/ICSE55347.2025.00159

Provided by
Carnegie Mellon University

Citation:
Fairness tool catches AI bias early (2025, August 21)
retrieved 21 August 2025
from https://techxplore.com/news/2025-08-fairness-tool-ai-bias-early.html





