
Showing AI users diversity in training data can boost perceived fairness and trust

by Simon Osuji
October 21, 2024
in Artificial Intelligence


AI data. Credit: Pixabay/CC0 Public Domain

While artificial intelligence (AI) systems, such as home assistants, search engines or large language models like ChatGPT, may seem nearly omniscient, their outputs are only as good as the data on which they are trained. However, ease of use often leads users to adopt AI systems without understanding what training data was used or who prepared the data, including potential biases in the data or held by trainers.


A new study by Penn State researchers suggests that making this information available could shape appropriate expectations of AI systems and further help users make more informed decisions about whether and how to use these systems.

The work investigated whether displaying racial diversity cues—the visual signals on AI interfaces that communicate the racial composition of the training data and the backgrounds of the typically crowd-sourced workers who labeled it—can enhance users’ expectations of algorithmic fairness and trust. Their findings were recently published in the journal Human-Computer Interaction.

AI training data is often systematically biased in terms of race, gender and other characteristics, according to S. Shyam Sundar, Evan Pugh University Professor and director of the Center for Socially Responsible Artificial Intelligence at Penn State.

“Users may not realize that they could be perpetuating biased human decision-making by using certain AI systems,” he said.

Lead author Cheng “Chris” Chen, assistant professor of communication design at Elon University, who earned her doctorate in mass communications from Penn State, explained that users are often unable to evaluate biases embedded in the AI systems because they don’t have information about the training data or the trainers.

“This bias presents itself after the user has completed their task, meaning the harm has already been inflicted, so users don’t have enough information to decide if they trust the AI before they use it,” Chen said.

Sundar said that one solution would be to communicate the nature of the training data, especially its racial composition.

“This is what we did in this experimental study, with the goal of finding out if it would make any difference to their perceptions of the system,” Sundar said.

To understand how diversity cues can impact trust in AI systems, the researchers created two experimental conditions, one diverse and one non-diverse. In the diverse condition, participants viewed a short description of the machine learning model and data labeling practice, along with a bar chart showing an equal distribution of facial images in the training data across three racial groups: white, Black and Asian, each making up about one-third of the dataset.

In the non-diverse condition, the bar chart showed that 92% of the images belonged to a single dominant racial group. Labelers’ backgrounds were presented the same way: the diverse condition showed roughly one-third each of white, Black and Asian labelers, while the non-diverse condition showed that 92% of labelers came from a single racial group.
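As a concrete picture of the manipulation, the two conditions differed only in the composition figures on the chart. Below is a minimal sketch of how such bar charts could be generated; the use of matplotlib and the exact minority-group percentages in the non-diverse condition are assumptions, as the study specifies only "about one-third each" versus 92% from a single group.

```python
import matplotlib.pyplot as plt

# Composition shown to each experimental group, per the study's description.
# The 4%/4% split in the non-diverse condition is an assumption; the article
# states only that 92% of images came from a single dominant group.
conditions = {
    "Diverse condition": {"White": 34, "Black": 33, "Asian": 33},
    "Non-diverse condition": {"White": 92, "Black": 4, "Asian": 4},
}

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharey=True)
for ax, (name, shares) in zip(axes, conditions.items()):
    ax.bar(list(shares.keys()), list(shares.values()))
    ax.set_title(name)
axes[0].set_ylabel("Share of training images (%)")
plt.tight_layout()
plt.show()
```

The same chart with labelers in place of training images would convey the labeler-diversity cue.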

Participants first reviewed data cards that showed the training data characteristics of an AI-powered facial expression classification tool called HireMe. They then watched automated interviews of three equally qualified male candidates of different races. The candidates’ neutral facial expressions and tone were analyzed in real time by the AI system and presented to participants, highlighting the most prominent expression and each candidate’s employability.
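The readout presented to participants can be imagined as a simple per-candidate summary. The sketch below is hypothetical; the study does not publish HireMe’s internals, so the function name, fields and the rule mapping expressions to employability are all illustrative.

```python
# Hypothetical shape of the per-candidate readout; field names and the
# employability rule are illustrative, not taken from the HireMe interface.
def summarize_analysis(expression_scores: dict[str, float]) -> dict:
    """Pick the most prominent expression and derive an employability cue."""
    prominent = max(expression_scores, key=expression_scores.get)
    good_fit = prominent in {"joy", "neutral"}  # assumed rule for illustration
    return {"prominent_expression": prominent, "good_fit": good_fit}

print(summarize_analysis({"joy": 0.71, "neutral": 0.22, "anger": 0.07}))
# {'prominent_expression': 'joy', 'good_fit': True}
```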

Participants viewed information on the races of the faces featured in the training data and the races of the people who labeled it. Credit: Creative Commons

Half the participants were exposed to racially biased performance by the system, in that it was manipulated by the experimenters to favor the white candidate, rating his neutral expression as joyful and suitable for the job, while interpreting the Black and Asian candidates’ expressions as anger and fear, respectively.

In the unbiased condition, the AI identified joy as each candidate’s prominent expression and noted all three equally as good fits for the position. Participants were then asked to provide feedback on the AI’s analysis, rating their agreement on a five-point scale and selecting the most appropriate emotion if they disagreed.
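The feedback step amounts to a small form: an agreement rating plus an optional corrected label. A hypothetical sketch of how such responses might be recorded follows; all names and the disagreement threshold are illustrative, not from the study’s materials.

```python
from dataclasses import dataclass
from typing import Optional

EMOTIONS = ["joy", "neutral", "anger", "fear", "sadness", "surprise"]

@dataclass
class FeedbackResponse:
    """One participant's feedback on the AI's reading of a candidate."""
    candidate_id: str
    agreement: int  # 1 (strongly disagree) to 5 (strongly agree)
    corrected_emotion: Optional[str] = None  # chosen when the participant disagrees

    def __post_init__(self):
        if not 1 <= self.agreement <= 5:
            raise ValueError("agreement must be on a 1-5 scale")
        # Assumed rule: ratings of 1-2 count as disagreement and need a correction.
        if self.agreement <= 2 and self.corrected_emotion not in EMOTIONS:
            raise ValueError("a disagreeing rating needs a corrected emotion")

# Example: a participant rejects an "anger" reading and corrects it to "neutral".
response = FeedbackResponse(candidate_id="B", agreement=1, corrected_emotion="neutral")
```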

“We found that showing racial diversity in training data and labelers’ backgrounds increased users’ trust in the AI,” Chen said. “The opportunity to provide feedback also helped participants develop a higher sense of agency and increased their potential to use the AI system in the future.”

However, the researchers noted that providing feedback about an unbiased system reduced usability for white participants. Because their perception was that the system was already functioning correctly and fairly, they saw little need to provide feedback and viewed it as an unnecessary burden.

The researchers found that, when multiple racial diversity cues were present, they worked independently; both data diversity and labeler diversity cues were effective in shaping users’ perceptions of the system’s fairness. The researchers explained this through the representativeness heuristic: users tended to believe that the AI model’s training was racially inclusive if its racial composition matched their understanding of diversity.

“If AI is just learning expressions labeled mostly by people of one race, the system may misrepresent emotions of other races,” said Sundar, who is also the James P. Jimirro Professor of Media Effects at the Penn State Bellisario College of Communications and co-director of the Media Effects Research Laboratory.

“The system needs to take race into account when deciding if a face is cheerful or angry, for example, and that comes in the form of greater racial diversity of both images and labelers in the training process.”

According to the researchers, for an AI system to be credible, the origin of its training data must be made available, so users can review and scrutinize it to determine their level of trust.

“Making this information accessible promotes transparency and accountability of AI systems,” Sundar said. “Even if users don’t access this information, its availability signals ethical practice, and fosters fairness and trust in these systems.”
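One way to operationalize that recommendation is a machine-readable data card published alongside the model, in the spirit of the cards shown to participants. The schema below is a hypothetical sketch with assumed counts; it is not a format the researchers propose.

```python
import json

# Hypothetical data card exposing training-data provenance; the image and
# labeler counts are invented for illustration.
data_card = {
    "model": "facial-expression-classifier",
    "training_data": {
        "n_images": 30_000,
        "racial_composition": {"white": 0.34, "black": 0.33, "asian": 0.33},
    },
    "labelers": {
        "n_labelers": 120,
        "racial_composition": {"white": 0.34, "black": 0.33, "asian": 0.33},
    },
    "user_feedback": "predictions can be rated and corrected in the interface",
}

print(json.dumps(data_card, indent=2))
```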

More information:
Cheng Chen et al, Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust, Human–Computer Interaction (2024). DOI: 10.1080/07370024.2024.2392494

Provided by
Pennsylvania State University

Citation:
Showing AI users diversity in training data can boost perceived fairness and trust (2024, October 21)
retrieved 21 October 2024
from https://techxplore.com/news/2024-10-ai-users-diversity-boost-fairness.html

