Nobody wants to talk about AI safety. Instead, they cling to five comforting myths

by Simon Osuji
February 12, 2025
in Artificial Intelligence


Credit: Unsplash/CC0 Public Domain

This week, France hosted an AI Action Summit in Paris to discuss burning questions around artificial intelligence (AI), such as how people can trust AI technologies and how the world can govern them.


Sixty countries, including France, China, India, Japan, Australia and Canada, signed a declaration for “inclusive and sustainable” AI. The United Kingdom and United States notably refused to sign, with the UK saying the statement failed to address global governance and national security adequately, and US Vice President JD Vance criticizing Europe’s “excessive regulation” of AI.

Critics say the summit sidelined safety concerns in favor of discussing commercial opportunities.

Last week, I attended the inaugural AI safety conference held by the International Association for Safe & Ethical AI, also in Paris, where I heard talks by AI luminaries Geoffrey Hinton, Yoshua Bengio, Anca Dragan, Margaret Mitchell, Max Tegmark, Kate Crawford, Joseph Stiglitz and Stuart Russell.

As I listened, I realized the disregard for AI safety concerns among governments and the public rests on a handful of comforting myths about AI that are no longer true—if they ever were.

1: Artificial general intelligence isn’t just science fiction

The most severe concerns about AI—that it could pose a threat to human existence—typically involve so-called artificial general intelligence (AGI). In theory, AGI will be far more advanced than current systems.

AGI systems will be able to learn, evolve and modify their own capabilities. They will be able to undertake tasks beyond those for which they were originally designed, and eventually surpass human intelligence.

AGI does not exist yet, and it is not certain it will ever be developed. Critics often dismiss AGI as something that belongs only in science fiction movies. As a result, the most critical risks are not taken seriously by some and are seen as fanciful by others.

However, many experts believe we are close to achieving AGI. Developers have suggested that, for the first time, they know what technical tasks are required to achieve the goal.

AGI will not stay solely in sci-fi forever. It will eventually be with us, and likely sooner than we think.

2: We already need to worry about current AI technologies

Given the most severe risks are often discussed in relation to AGI, there is often a misplaced belief we do not need to worry too much about the risks associated with contemporary “narrow” AI.

However, current AI technologies are already causing significant harm to humans and society. These harms occur through obvious mechanisms such as fatal road and aviation crashes, warfare, cyber incidents, and even encouraging suicide.

AI systems have also caused harm in more oblique ways, such as election interference, the replacement of human work, biased decision-making, deepfakes, and disinformation and misinformation.

According to MIT’s AI Incident Tracker, the harms caused by current AI technologies are on the rise. There is a critical need to manage current AI technologies as well as those that might appear in future.

3: Contemporary AI technologies are ‘smarter’ than we think

A third myth is that current AI technologies are not actually that clever and hence are easy to control. This myth is most often seen when discussing the large language models (LLMs) behind chatbots such as ChatGPT, Claude and Gemini.

There is plenty of debate about exactly how to define intelligence and whether AI technologies truly are intelligent, but for practical purposes these are distracting side issues. It is enough that AI systems behave in unexpected ways and create unforeseen risks.

For example, existing AI technologies have been found to engage in behaviors that most people would not expect from non-intelligent entities. These include deceit, collusion, hacking, and even acting to ensure their own preservation.

Whether these behaviors are evidence of intelligence is a moot point. The behaviors may cause harm to humans either way.

What matters is that we have the controls in place to prevent harmful behavior. The idea that “AI is dumb” isn’t helping anyone.

4: Regulation alone is not enough

Many people concerned about AI safety have advocated for AI safety regulations.

Last year the European Union’s AI Act, representing the world’s first AI law, was widely praised. It built on already established AI safety principles to provide guidance around AI safety and risk.

While regulation is crucial, it is not all that’s required to ensure AI is safe and beneficial. Regulation is only part of a complex network of controls required to keep AI safe.

These controls will also include codes of practice, standards, research, education and training, performance measurement and evaluation, procedures, security and privacy controls, incident reporting and learning systems, and more. The EU AI Act is a step in the right direction, but a huge amount of work is still needed to develop the mechanisms that will make it effective.

5: It’s not just about the AI

The fifth and perhaps most entrenched myth centers around the idea that AI technologies themselves create risk.

AI technologies form one component of a broader “sociotechnical” system. There are many other essential components: humans, other technologies, data, artifacts, organizations, procedures and so on.

Safety depends on the behavior of all these components and their interactions. This “systems thinking” philosophy demands a different approach to AI safety.

Instead of controlling the behavior of individual components of the system, we need to manage interactions and emergent properties.

With AI agents on the rise—AI systems with more autonomy and the ability to carry out more tasks—the interactions between different AI technologies will become increasingly important.

At present, there has been little work examining these interactions and the risks that could arise in the broader sociotechnical system in which AI technologies are deployed. AI safety controls are required for all interactions within the system, not just the AI technologies themselves.

AI safety is arguably one of the most important challenges our societies face. To get anywhere in addressing it, we will need a shared understanding of what the risks really are.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Nobody wants to talk about AI safety. Instead, they cling to five comforting myths (2025, February 12)
retrieved 12 February 2025
from https://techxplore.com/news/2025-02-ai-safety-comforting-myths.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.


