Should Humanity Make Way for an AI “Worthy Successor”?

By Simon Osuji
June 24, 2025
In Artificial Intelligence

These days, it’s not unusual to encounter “doomers” in AI circles—people who think that superintelligent AI will wipe out humanity, either because it wants our stuff or because it finds us annoying. And there are plenty of serious AI researchers working on “alignment,” or ensuring that AI systems’ goals match our own, so that AI will support human flourishing rather than end it. But not many people are thinking about how to provide the best outcome for the universe if, indeed, superintelligent AI is ready to leave humanity in the dust.

Enter Daniel Faggella. The founder of the AI research company Emerj, Faggella argues that it’s critical that we build AI systems that are “worthy successors” to humanity. A few weeks ago, he hosted a symposium at a cliffside mansion in San Francisco, where AI insiders aired their hopes and fears about a post-human future. IEEE Spectrum caught up with Faggella to hear more about his controversial and provocative vision.

How would you explain the concept of the worthy successor to someone who’s never encountered it before?

Daniel Faggella: The gist of the worthy successor is a post-human intelligence that’s so capable and morally valuable that you would consider it best if it, rather than humanity, took the mantle of the future and determined the future trajectory of intelligence.

The core belief here is that artificial general intelligence is probably unlikely to be aligned [with human goals]. So if the torch of humanity is valuable, what is it about the flame that’s valuable? I think all torches go out eventually, and ultimately marriage to any one torch is scorn for the flame itself. My hypothesis is that the flame is consciousness and autopoiesis, or self-creation. If AGI [artificial general intelligence] has those two things, it would carry the flame into the future. Because we cannot hold this torch forever: I’m arguing that we might have a generation with this torch until it’s turned into something else. So we ought to ensure that that which we create has those two moral traits. Because when we’re gonzo, is the cosmos filled with value, or is it all gone?

What’s your time frame for when this question of succession becomes important? Is it within our lifetimes?

Faggella: Oh, absolutely. I would suspect there’s a really good shot that within the decade, we’re already feeling the destructive and transformative forces. I think this torch is really under threat within one to two decades. I think we might be dealing with the final flickers here.

Imagine that everything goes great according to your rubric, and the worthy successor is identified. What happens to humans?

Faggella: We should do our damnedest to get the best shake we can get. Some people would say the best shake is: Let it give us Earth. I think this is probably an unrealistic request. Probably the best shake looks like something like: Each individual human instantiation of sentience gets popped up into some kind of sugar cube for a billion simulated years of bliss, but it’s only six hours in clock time. We should try for the best ultimate retirement, but I don’t know how much control we’ll have over what happens to us.

I’m guessing you do not have kids.

Faggella: No, I don’t. If you think about timelines the way I do, you probably don’t. When people have children, that’s an investment in a very hominid-shaped future. It might make them even more head-in-the-sand about the transformative and destructive powers. It’s like, I bought the lake house so my grandkids could go water skiing. I’m not going to even think of a future where they’re not doing that.

Who Will Shape the Future of AGI?

What were your goals for the symposium?

Faggella: The objectives were really to open up the state space of possible futures being considered by two parties: the people doing AI governance and AI alignment, and the people who are the creators, the people writing the code. The goal was to get those two parties to consider: Hey, if AI doesn’t turn out to be alignable, and if our fundamental human experience is changing drastically, how could we define futures that are good? We got people from all the major U.S. labs.

Do you think that the people from the AI industry should be the ones making decisions about ushering in a worthy successor?

Faggella: I very much don’t see this in any one company or person’s hands. The current arms-race dynamic means [the AI companies] cannot even think about worthiness. They must only think about what is economically and militarily powerful. Does that make them evil? No, it means that they’re susceptible to incentives. So we need international coordination. We need governance with hard, rigorous incentives that doesn’t permit anybody to grab the steering wheel and steer everybody in a terrible direction.

You’ve said that people within the big AI companies know that AGI is likely to end humanity. But they’re trying to build it anyway. Why do you think that is?

Faggella: If you’re talking about the leaders of the labs, they all know. If you’re a Sam Altman or Elon Musk or Demis Hassabis, you have two choices. Choice number one would be knowing it’s probably going to kill you and everybody else at some point, but build it and have the final triumph. There’s no higher triumph than to be the crescendo of all intelligence in the entire planet. Now, here’s your other option. Go get into FinTech, or invest in real estate, or go on vacation, and then, just on some random day, be devoured by someone else’s silicon deity.

Is there anybody whose work on AGI seems particularly wise and thoughtful to you?

Faggella: There’s a report by a think tank in Canada called CIGI that I think has taken governance into account in a really smart way. They talk about governance kicking in at different levels of capability and danger. They say if AI never develops these abilities, we won’t govern it that way. But if it happens, we should have mechanisms. “A Narrow Path,” written by the Control AI people, is also a reasonable proposal about what the process to enter towards international coordination would look like.

How do you square your conviction that we’re on an accelerated path to AGI with cold-water moments like that recent paper from Apple that said: Actually, LLMs are not doing anything that resembles reasoning?

Faggella: I don’t know if what we do is reasoning. How much stochastic parroting are we doing? What is the mechanism in our brain? Humans long thought that flight would involve some flapping, because everything that flies flaps, right? But as it turns out, flight doesn’t involve flapping at certain scales. I would guess that agency, reasoning, and potentially even sentience will have wildly divergent manifestations that, nonetheless, over time, will completely trounce us.

