• Business
  • Energy
  • Markets
  • Intelligence
    • Policy Intelligence
    • Fashion Intelligence
    • Economic Intelligence
    • Security Intelligence
  • Technology
  • Infrastructure
  • Politics
  • LBNN Blueprints
  • Business
  • Energy
  • Markets
  • Intelligence
    • Policy Intelligence
    • Fashion Intelligence
    • Economic Intelligence
    • Security Intelligence
  • Technology
  • Infrastructure
  • Politics
  • LBNN Blueprints
LIVE MARKETS
Initializing...
Home Artificial Intelligence

Turmoil at OpenAI shows we must address whether AI developers can regulate themselves

Simon Osuji by Simon Osuji
December 5, 2023
in Artificial Intelligence
0
Turmoil at OpenAI shows we must address whether AI developers can regulate themselves
0
SHARES
2
VIEWS
Share on FacebookShare on Twitter


OpenAI
Credit: Unsplash/CC0 Public Domain

OpenAI, developer of ChatGPT and a leading innovator in the field of artificial intelligence (AI), was recently thrown into turmoil when its chief-executive and figurehead, Sam Altman, was fired. As it was revealed that he would be joining Microsoft’s advanced AI research team, more than 730 OpenAI employees threatened to quit. Finally, it was announced that most of the board who had terminated Altman’s employment were being replaced, and that he would be returning to the company.

In the background, there have been reports of vigorous debates within OpenAI regarding AI safety. This not only highlights the complexities of managing a cutting-edge tech company, but also serves as a microcosm for broader debates surrounding the regulation and safe development of AI technologies.

Large language models (LLMs) are at the heart of these discussions. LLMs, the technology behind AI chatbots such as ChatGPT, are exposed to vast sets of data that help them improve what they do—a process called training. However, the double-edged nature of this training process raises critical questions about fairness, privacy, and the potential misuse of AI.

Training data reflects both the richness and biases of the information available. The biases may reflect unjust social concepts and lead to serious discrimination, the marginalizing of vulnerable groups, or the incitement of hatred or violence.

Training datasets can be influenced by historical biases. For example, in 2018 Amazon was reported to have scrapped a hiring algorithm that penalized women—seemingly because its training data was composed largely of male candidates.

LLMs also tend to exhibit different performance for different social groups and different languages. There is more training data available in English than in other languages, so LLMs are more fluent in English.

Can companies be trusted?

LLMs also pose a risk of privacy breaches since they are absorbing huge amounts of information and then reconstituting it. For example, if there is private data or sensitive information in the training data of LLMs, they may “remember” this data or make further inferences based on it, possibly leading to the leakage of trade secrets, the disclosure of health diagnoses, or the leakage of other types of private information.

LLMs might even enable attack by hackers or harmful software. Prompt injection attacks use carefully crafted instructions to make the AI system do something it wasn’t supposed to, potentially leading to unauthorized access to a machine, or to the leaking of private data. Understanding these risks necessitates a deeper look into how these models are trained, the inherent biases in their training data, and the societal factors that shape this data.

The drama at OpenAI has raised concerns about the company’s future and sparked discussions about the regulation of AI. For example, can companies where senior staff hold very different approaches to AI development be trusted to regulate themselves?

The rapid pace at which AI research makes it into real-world applications highlights the need for more robust and wide-ranging frameworks for governing AI development, and ensuring the systems comply with ethical standards.

When is an AI system ‘safe enough’?

But there are challenges whatever approach is taken to regulation. For LLM research, the transition time from research and development to the deployment of an application may be short. This makes it more difficult for third-party regulators to effectively predict and mitigate the risks. Additionally, the high technical skill threshold and computational costs required to train models or adapt them to specific tasks further complicates oversight.

Targeting early LLM research and training may be more effective in addressing some risks. It would help address some of the harms that originate in training data. But it’s important also to establish benchmarks: for instance, when is an AI system considered “safe enough”?

The “safe enough” performance standard may depend on which area it’s being used in, with stricter requirements in high-risk areas such as algorithms for the criminal justice system or hiring.

As AI technologies, particularly LLMs, become increasingly integrated into different aspects of society, the imperative to address their potential risks and biases grows. This involves a multifaceted strategy that includes enhancing the diversity and fairness of training data, implementing effective protections for privacy, and ensuring the responsible and ethical use of the technology across different sectors of society.

The next steps in this journey will likely involve collaboration between AI developers, regulatory bodies, and a diverse sample of the general public to establish standards and frameworks.

The situation at OpenAI, while challenging and not entirely edifying for the industry as a whole, also presents an opportunity for the AI research industry to take a long, hard look at itself, and innovate in ways that prioritize human values and societal well-being.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.The Conversation

Citation:
Turmoil at OpenAI shows we must address whether AI developers can regulate themselves (2023, December 4)
retrieved 5 December 2023
from https://techxplore.com/news/2023-12-turmoil-openai-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.





Source link

Previous Post

More than 2,400 fossil fuel-linked delegates at COP28

Next Post

Want to Store a Message in DNA? That’ll Be $1,000

Next Post
Want to Store a Message in DNA? That’ll Be $1,000

Want to Store a Message in DNA? That’ll Be $1,000

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

POPULAR NEWS

  • Mahama attends Liberia’s 178th independence anniversary

    Mahama attends Liberia’s 178th independence anniversary

    0 shares
    Share 0 Tweet 0
  • Ghana to build three oil refineries, five petrochemical plants in energy sector overhaul

    0 shares
    Share 0 Tweet 0
  • The world’s top 10 most valuable car brands in 2025

    0 shares
    Share 0 Tweet 0
  • Top 10 African countries with the highest GDP per capita in 2025

    0 shares
    Share 0 Tweet 0
  • Global ranking of Top 5 smartphone brands in Q3, 2024

    0 shares
    Share 0 Tweet 0

Get strategic intelligence you won’t find anywhere else. Subscribe to the Limitless Beliefs Newsletter for monthly insights on overlooked business opportunities across Africa.

Subscription Form

© 2026 LBNN – All rights reserved.

Privacy Policy | About Us | Contact

Tiktok Youtube Telegram Instagram Linkedin X-twitter
No Result
View All Result
  • Home
  • Business
  • Politics
  • Markets
  • Crypto
  • Economics
    • Manufacturing
    • Real Estate
    • Infrastructure
  • Finance
  • Energy
  • Creator Economy
  • Wealth Management
  • Taxes
  • Telecoms
  • Military & Defense
  • Careers
  • Technology
  • Artificial Intelligence
  • Investigative journalism
  • Art & Culture
  • LBNN Blueprints
  • Quizzes
    • Enneagram quiz
  • Fashion Intelligence

© 2026 JNews - Premium WordPress news & magazine theme by Jegtheme.