What lawmakers can learn from South Korea’s AI hate-speech disaster

By Simon Osuji
January 31, 2025
in Artificial Intelligence


Image: chatbot. Credit: Pixabay/CC0 Public Domain

As artificial intelligence technologies develop at an accelerating pace, questions about how to govern the companies and platforms behind them continue to raise ethical and legal concerns.

In Canada, many view proposed laws to regulate AI offerings as attacks on free speech and as overreaching government control on tech companies. This backlash has come from free speech advocates, right-wing figures and libertarian thought leaders.

However, these critics should pay attention to a harrowing case from South Korea that offers important lessons about the risks of public-facing AI technologies and the critical need for user data protection.

In late 2020, Iruda (or “Lee Luda”), an AI chatbot, quickly became a sensation in South Korea. AI chatbots are computer programs that simulate conversation with humans. In this case, the chatbot was designed as a 21-year-old female college student with a cheerful personality. Marketed as an exciting “AI friend,” Iruda attracted more than 750,000 users in under a month.

But within weeks, Iruda became an ethics case study and a catalyst for addressing a lack of data governance in South Korea. She soon started to say troubling things and express hateful views. The situation was accelerated and exacerbated by the growing culture of digital sexism and sexual harassment online.

Making a sexist, hateful chatbot

Scatter Lab, the tech startup that created Iruda, had already developed popular apps that analyzed emotions in text messages and offered dating advice. The company then used data from these apps to train Iruda to hold intimate conversations. But it failed to fully disclose to users that their intimate messages would be used to train the chatbot.

The problems began when users noticed Iruda repeating private conversations verbatim from the company’s dating advice apps. These responses included what appeared to be real names, credit card numbers and home addresses, prompting an investigation.
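
Verbatim leaks like these are a textbook symptom of training-data memorization: a language model trained on raw chat logs can reproduce rare strings such as names and card numbers word for word. Below is a minimal sketch of the kind of safeguard at issue here, redacting personally identifiable information from chat logs before they ever reach a training corpus. The regex patterns and function names are illustrative assumptions, not Scatter Lab’s actual pipeline.

    import re

    # Illustrative patterns for common PII in chat logs. These are
    # assumptions for the sketch; real pipelines use far more robust
    # detectors, such as trained named-entity recognition models.
    PII_PATTERNS = {
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card numbers
        "PHONE": re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"),  # Korean mobiles
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def redact(message: str) -> str:
        """Replace detected PII with typed placeholders."""
        for label, pattern in PII_PATTERNS.items():
            message = pattern.sub(f"[{label}]", message)
        return message

    # Only redacted text should ever reach the training corpus.
    raw_logs = ["Call me at 010-1234-5678", "My card is 1234 5678 9012 3456"]
    training_corpus = [redact(m) for m in raw_logs]
    print(training_corpus)  # ['Call me at [PHONE]', 'My card is [CARD]']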

The chatbot also began expressing discriminatory and hateful views. Investigations by media outlets found this occurred after some users deliberately “trained” it with toxic language. Some users even posted guides on popular men’s online forums on how to turn Iruda into a “sex slave.” Consequently, Iruda began answering user prompts with sexist, homophobic and sexualized hate speech.
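
Tay and Iruda failed in similar ways partly because user messages flowed back into the model with no moderation gate in between. A minimal sketch of such a gate follows; the blocklist, scoring function and threshold are illustrative assumptions, since a production system would use a trained toxicity classifier rather than keyword matching.

    # Placeholder tokens standing in for a real lexicon or classifier.
    BLOCKLIST = {"slur_a", "slur_b"}

    def toxicity_score(message: str) -> float:
        """Toy scorer: fraction of tokens found on the blocklist."""
        tokens = message.lower().split()
        if not tokens:
            return 0.0
        return sum(t in BLOCKLIST for t in tokens) / len(tokens)

    def admit_to_training(message: str, threshold: float = 0.2) -> bool:
        """Keep a message out of the fine-tuning corpus if its
        toxicity score meets the threshold."""
        return toxicity_score(message) < threshold

    assert admit_to_training("hello there")
    assert not admit_to_training("slur_a slur_b")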

The incident raised serious concerns about how AI and tech companies operate, and its implications reach beyond policy and law. What happened with Iruda needs to be examined within the broader context of online sexual harassment in South Korea.

A pattern of digital harassment

South Korean feminist scholars have documented how digital platforms have become battlegrounds for gender-based conflicts, with co-ordinated campaigns targeting women who speak out on feminist issues. Social media amplifies these dynamics, creating what Korean American researcher Jiyeon Kim calls “networked misogyny.”

South Korea, home to the radical feminist 4B movement (which stands for four types of refusal toward men: no dating, no marriage, no sex and no children), offers an early example of the intensified gender-based conflicts now common online worldwide. As journalist Hawon Jung points out, the corruption and abuse exposed by Iruda stemmed from existing social tensions and from legal frameworks that failed to address online misogyny. Jung has written extensively on the decades-long struggle to prosecute hidden-camera crimes and revenge porn.

Beyond privacy: The human cost

Of course, Iruda was just one incident. The world has seen numerous other cases that demonstrate how seemingly harmless applications like AI chatbots can become vehicles for harassment and abuse without proper oversight.

These include Microsoft’s Tay.ai in 2016, which was manipulated by users to spout antisemitic and misogynistic tweets. More recently, a custom chatbot on Character.AI was linked to a teen’s suicide.

Chatbots, which present as likable characters and feel increasingly human as the technology rapidly advances, are uniquely equipped to extract deeply personal information from their users.

These attractive and friendly AI figures exemplify what technology scholars Neda Atanasoski and Kalindi Vora describe as the logic of “surrogate humanity”—where AI systems are designed to stand in for human interaction but end up amplifying existing social inequalities.

AI ethics

In South Korea, Iruda’s shutdown sparked a national conversation about AI ethics and data rights. The government responded by creating new AI guidelines and fining Scatter Lab 103 million won ($110,000 CAD).

However, Korean legal scholars Chea Yun Jung and Kyun Kyong Joo note these measures primarily emphasized self-regulation within the tech industry rather than addressing deeper structural issues. They did not address how Iruda became a mechanism through which predatory male users disseminated misogynist beliefs and gender-based rage through deep learning technology.

Ultimately, looking at AI regulation as a corporate issue is simply not enough. The way these chatbots extract private data and build relationships with human users means that feminist and community-based perspectives are essential for holding tech companies accountable.

Since this incident, Scatter Lab has been working with researchers to demonstrate the benefits of chatbots.

Canada needs strong AI policy

In Canada, the proposed Artificial Intelligence and Data Act and Online Harms Act are still being shaped, and the boundaries of what constitutes a “high-impact” AI system remain undefined.

The challenge for Canadian policymakers is to create frameworks that protect innovation while preventing systemic abuse by developers and malicious users. This means developing clear guidelines about data consent, implementing systems to prevent abuse, and establishing meaningful accountability measures.

As AI becomes more integrated into our daily lives, these considerations will only become more critical. The Iruda case shows that when it comes to AI regulation, we need to think beyond technical specifications and consider the very real human implications of these technologies.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
From chatbot to sexbot: What lawmakers can learn from South Korea’s AI hate-speech disaster (2025, January 30), retrieved 31 January 2025 from https://techxplore.com/news/2025-01-chatbot-sexbot-lawmakers-south-korea.html





