World leaders still need to wake up to AI risks, say leading experts ahead of AI Safety Summit

Simon Osuji by Simon Osuji
May 20, 2024
in Artificial Intelligence


Credit: CC0 Public Domain

Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago.

At that summit, world leaders pledged to govern AI responsibly. Yet as the second AI Safety Summit in Seoul (21–22 May) approaches, 25 of the world’s leading AI scientists say that not enough is being done to protect us from the technology’s risks. In an expert consensus paper published in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies.

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says, “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in the face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems, outperforming human abilities across many critical domains, will be developed within the current decade or the next.

They say that although governments worldwide have been discussing frontier AI and have made some attempts at introducing initial guidelines, these efforts are simply incommensurate with the possibility of rapid, transformative progress that many experts expect.

Current research into AI safety is seriously lacking: only an estimated 1–3% of AI publications concern safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman; in total, 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, the EU, the UK, and other AI powers, and include Turing Award winners, Nobel laureates, and authors of standard AI textbooks.

This article marks the first time that such a large and international group of experts has agreed on priorities for global policymakers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

  • establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • require AI companies to prioritize safety and to demonstrate that their systems cannot cause harm. This includes using “safety cases” (already used for other safety-critical technologies such as aviation), which shift the burden of demonstrating safety onto AI developers.
  • implement mitigation standards commensurate with the risk levels posed by AI systems. An urgent priority is to put in place policies that trigger automatically when AI reaches certain capability milestones: if AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.
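The capability-triggered approach in the last recommendation can be sketched as a simple tiered mapping. This is purely an illustration: the benchmark scores, thresholds, and requirement names below are invented for this sketch and do not come from the paper itself.

```python
# Illustrative sketch of capability-triggered mitigation tiers.
# All thresholds and requirement names are hypothetical, not from the paper.

def required_mitigations(capability_score: float) -> list[str]:
    """Map a (hypothetical) capability benchmark score to requirements.

    Requirements accumulate as capability milestones are crossed, and
    relax automatically if measured capability falls back below a
    threshold -- mirroring the paper's idea of policies that tighten
    with rapid progress and loosen if progress slows.
    """
    tiers = [
        (0.3, "public model documentation"),
        (0.6, "third-party risk assessment"),
        (0.8, "safety case reviewed by regulator"),
        (0.95, "licensing and strict access controls"),
    ]
    return [req for threshold, req in tiers if capability_score >= threshold]
```

For example, a system scoring 0.7 on this hypothetical benchmark would owe the first two obligations, while one scoring 0.99 would owe all four; re-evaluating the score periodically makes the regime self-adjusting in both directions.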

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers.

To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly.

In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and of the biosphere, and in the marginalization or extinction of humanity.

Stuart Russell OBE, Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says, “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry.

“It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that ‘regulation stifles innovation.’ That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

More information:
Yoshua Bengio et al, Managing extreme AI risks amid rapid progress, Science (2024). DOI: 10.1126/science.adn0117. www.science.org/doi/10.1126/science.adn0117

Provided by
University of Oxford

Citation:
World leaders still need to wake up to AI risks, say leading experts ahead of AI Safety Summit (2024, May 20)
retrieved 20 May 2024
from https://techxplore.com/news/2024-05-world-leaders-ai-experts-safety.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.






© 2023 LBNN - All rights reserved.
