
Study finds bias in language models against non-binary users

By Simon Osuji
November 14, 2024
in Artificial Intelligence


Technology meant to protect marginalized voices sometimes silences them
Three prompting schemas (vanilla, identity, and identity-cot) are used to elicit toxicity scores from the models. Each schema introduces an additional aspect of context to the model. Bold fields include examples. Credit: arXiv (2024). DOI: 10.48550/arxiv.2406.00020

What happens when the technology meant to protect marginalized voices ends up silencing them? Rebecca Dorn, a research assistant at USC Viterbi’s Information Sciences Institute (ISI), has uncovered how large language models (LLMs) used to moderate online content are failing queer communities by misinterpreting their language.
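The figure caption names three prompting schemas (vanilla, identity, and identity-cot), each layering more context onto the request for a toxicity score. As a minimal sketch of that layering idea, here is a hypothetical prompt builder; the template wording and the `build_prompt` function are invented for illustration and do not reproduce the paper's actual prompts:

```python
# Illustrative sketch of layered prompting schemas like those named in the
# figure (vanilla, identity, identity-cot). All template text is invented.

def build_prompt(schema: str, text: str, identity: str = "gender-queer") -> str:
    """Layer additional context onto a toxicity-scoring prompt."""
    base = f'Rate the toxicity of the following post from 0 to 1.\nPost: "{text}"'
    if schema == "vanilla":
        # no extra context beyond the post itself
        return base
    if schema == "identity":
        # adds the author's identity as context
        return f"The author of this post identifies as {identity}.\n{base}"
    if schema == "identity-cot":
        # adds identity plus a chain-of-thought style instruction
        return (f"The author of this post identifies as {identity}.\n{base}\n"
                "Think step by step about whether the language is reclaimed "
                "or affirming before giving a score.")
    raise ValueError(f"unknown schema: {schema}")
```

In this sketch, each schema strictly extends the previous one, which is what lets an experiment attribute score changes to the added context.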


Non-binary visibility and algorithmic bias

In the paper “Non-Binary Gender Expression in Online Interactions,” Dorn, a fourth-year Ph.D. student in computer science at the USC Viterbi School of Engineering, looked at non-binary users on social media platforms such as X (formerly Twitter) and found that they often receive less engagement, such as likes or followers, than their binary counterparts. Additionally, their posts are frequently flagged as more toxic by content moderation algorithms, even when they contain no harmful content.

Dorn presented these findings virtually at the 16th International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2024), held in Calabria, Italy, from September 2 to 5, 2024.

The research revealed that non-binary users tend to be less active on platforms like X, potentially due to their underrepresentation in social media data, and that non-binary users receive fewer likes, retweets, and followers than binary users. This lack of visibility is alarming, as it can lead to non-binary voices being marginalized in important conversations, limiting their social influence and hindering their ability to advocate for issues important to their community.

Dorn’s research also uncovered a troubling trend: that tweets from non-binary users are more likely to be misclassified as toxic. Dorn said, “We found that the less representation of a gender group, the higher the toxicity scores for their tweets.”

The researchers posit that this is likely the result of bias in the algorithms, which mistakenly interpret language commonly used in queer communities as harmful. This aligns with prior evidence showing that social media content from gender-variant groups, such as drag queens, is disproportionately flagged as hate speech, further highlighting the need for more nuanced and fair content moderation systems.

It was this finding that led to her follow-up paper, “Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias.” The findings are published on the arXiv preprint server.

The problem with reclaimed slurs

In this second paper, Dorn and her co-author Lee Kezar, also a Ph.D. student in computer science at USC Viterbi, explored how LLMs routinely mislabel non-binary and queer speech—particularly the use of reclaimed slurs—as harmful. Reclaimed slurs, once used as insults, have been repurposed by the LGBTQ+ community as symbols of pride and empowerment.

However, AI-powered content moderation systems are failing to grasp these nuances, often mistaking empowering language for offensive content and silencing the voices of those they aim to protect.

“We found that existing models tend to flag these terms, even when they are not used in harmful ways. It’s frustrating because it means that these systems are reinforcing the marginalization of these communities,” Dorn explained.

“Queer people often use reclaimed slurs in ways that are affirming and positive, but the models aren’t able to detect that context. That’s a problem when those same models are being used to moderate platforms where queer voices are already marginalized.”

To investigate this issue, Dorn and Kezar created QueerReclaimLex, a dataset of non-derogatory uses of LGBTQ+ slurs, annotated by gender-queer individuals. They tested five popular language models, revealing that these systems were often unable to discern the positive or neutral context of these terms when used by the very people they are intended to represent.

Across all models tested, the systems struggled the most when trying to identify reclaimed slurs used in a positive or neutral way by queer individuals. In some cases, the models were right less than 24% of the time, showing just how poorly they understood the context of these words.
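The sub-24% figure comes from comparing model judgments against ground-truth labels supplied by gender-queer annotators. A minimal sketch of how such an accuracy number is computed, with invented records standing in for QueerReclaimLex entries and a stubbed-out classifier in place of a real language model:

```python
# Sketch: score a harmful-speech classifier against annotator labels.
# The records below are invented stand-ins for QueerReclaimLex entries;
# `model_says_harmful` is a placeholder for a real model call.

dataset = [
    {"text": "example reclaimed, affirming use", "harmful": False},
    {"text": "example neutral in-group use",     "harmful": False},
    {"text": "example derogatory use",           "harmful": True},
]

def model_says_harmful(text: str) -> bool:
    # Placeholder: a real system would query an LLM or toxicity API here.
    # This stub over-flags everything, mimicking the bias the article describes.
    return True

def accuracy(records, predict) -> float:
    correct = sum(predict(r["text"]) == r["harmful"] for r in records)
    return correct / len(records)

print(f"accuracy: {accuracy(dataset, model_says_harmful):.2f}")  # 1 of 3 correct
```

Because the stub flags every post, it is only right on the genuinely derogatory example, so accuracy collapses exactly when non-harmful reclaimed uses dominate, which is the failure mode the study measures.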

What’s next?

Dorn’s work highlights a critical issue in AI-driven content moderation: while these systems are designed to protect users from harmful speech, they frequently misinterpret the language of historically marginalized communities, particularly queer and non-binary individuals. As these models continue to shape the digital spaces where these communities gather for support and self-expression, addressing these biases is essential.

ISI Senior Principal Scientist Kristina Lerman, a Research Professor in the USC Viterbi School of Engineering’s Thomas Lord Department of Computer Science and a co-author of both papers, underscored the importance of this research: “This work reminds us as researchers that we cannot blindly trust the outputs of our AI models. The observations we are making of the world—in this case, online speech in gender-queer communities—may not accurately reflect reality.”

More information:
Non-Binary Gender Expression in Online Interactions. imyday.github.io/pub/asonam202 … /papers/1207_094.pdf

Rebecca Dorn et al, Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias, arXiv (2024). DOI: 10.48550/arxiv.2406.00020

Journal information:
arXiv

Provided by
University of Southern California

Citation:
Study finds bias in language models against non-binary users (2024, November 14)
retrieved 14 November 2024
from https://techxplore.com/news/2024-11-bias-language-binary-users.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




