The rise of humanlike chatbots detracts from developing AI for the human good

Simon Osuji by Simon Osuji
August 26, 2025
in Artificial Intelligence
AI companies claim to work towards equipping every person with an AI-assistant. Credit: Matheus Bertelli/Pexels, CC BY

Grok is a generative artificial intelligence (genAI) chatbot by xAI that, according to Elon Musk, is “the smartest AI in the world.” Grok’s latest upgrade is Ani, a porn-enabled anime girlfriend, recently joined by a boyfriend informed by “Twilight” and “Fifty Shades of Grey.”


This summer, both xAI and OpenAI launched updated versions of their chatbots. Each touted improved performance, but more notably, new personalities. xAI introduced Ani; OpenAI rolled out a colder-by-default GPT-5 with four personas to replace its unfailingly sycophantic GPT-4o model.

Similar to claims made by Google DeepMind and Anthropic, both companies insist they’re building AI to “benefit all humanity” and “advance human comprehension.” Anthropic claims, at least rhetorically, to be doing so responsibly. But their design choices suggest otherwise.

Instead of equipping every person with an AI assistant—a research collaborator with Ph.D.-level intelligence—some of today’s leaders have released anthropomorphized AI systems that operate first as friends, lovers and therapists.

As researchers and experts in AI policy and impact, we argue that what’s being sold as scientific infrastructure increasingly resembles science fiction gone awry. These chatbots are engineered not as tools for discovery, but as companions designed to foster para-social, non-reciprocal bonds.

Human/non-human

The core problem is anthropomorphism: the projection of human traits onto non-human entities. As cognitive scientist Pascal Boyer explains, our minds are tuned to interpret even minimal cues in social terms. What once aided our ancestors’ survival now fuels AI companies by capturing the minds of their users.

When machines speak, gesture or simulate emotion, they trigger those same evolved instincts, so that instead of recognizing the system as a machine, users perceive it as human.

Nonetheless, AI companies have pushed on, building systems that exploit these biases. The justification is that this makes interaction feel seamless and intuitive. But the consequences can render anthropomorphic design deceptive and dishonest.

Consequences of anthropomorphic design

In its mildest form, anthropomorphic design prompts users to respond as if there were another human on the other side of the exchange, and can be as simple as saying “thank you.”

The stakes grow higher when anthropomorphism leads users to believe the system is conscious: that it feels pain, reciprocates affection or understands their problems. Although new research suggests the criteria for consciousness could one day be met, false attributions of consciousness and emotion have already led to extreme outcomes, such as users marrying their AI companions.

However, anthropomorphic design does not always inspire love. For some users, unhealthy attachments have led to self-harm or to harming others.

Some users even behave as though AI could be humiliated or manipulated, lashing out abusively as if it were a human target. Recognizing this, Anthropic, the first company to hire an AI welfare expert, has given its Claude models the unusual capacity to end such conversations.

Across this spectrum, anthropomorphic design pulls users away from leveraging AI’s true capabilities, forcing us to confront the urgent question of whether anthropomorphism constitutes a design flaw—or more critically, a crisis.

De-anthropomorphizing AI

The obvious solution seems to be stripping AI systems of their humanity. American philosopher and cognitive scientist Daniel Dennett argued that this may be humanity’s only hope. But such a solution is far from simple because the anthropomorphization of these systems has already led users to form deep emotional attachments.

When OpenAI replaced GPT-4o with GPT-5 as the default in ChatGPT, some users expressed real distress and mourned the loss of 4o. What they mourned, however, was the loss of its prior speech patterns and its way of using language.

This is what makes anthropomorphism such a problematic design model. As a result of the impressive language abilities of these systems, users attribute mentality to them—and their engineered personas exploit this further.

Instead of seeing the machine for what it is—impressively competent but not human—users read into its speech patterns. While AI pioneer Geoffrey Hinton warns that these systems may be dangerously competent, something far more insidious results from their anthropomorphization.

Flaw in the design

AI companies are increasingly catering to users’ desire for AI companions, whether sexbot or therapist.

Anthropomorphism is what makes these systems dangerous today because humans have intentionally built them to mimic us and exploit our instincts. If AI consciousness proves impossible, these design choices will be the cause of human suffering.

But in a hypothetical world in which AI does attain consciousness, our choice to force it into a human-shaped mind—for our own convenience and entertainment, replicated across the world’s data centers—may invent an entirely new kind, and scale, of suffering.

The real danger of anthropomorphic AI isn’t some near or distant future where machines take over. The danger is here, now, and hiding in the illusion that these systems are like us.

This is not the model that will “benefit all humanity” (as OpenAI promises) or “help us understand the universe” (as xAI’s Elon Musk claims). For the sake of social and scientific good, we must resist anthropomorphic design and begin the work of de-anthropomorphizing AI.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
The rise of humanlike chatbots detracts from developing AI for the human good (2025, August 26)
retrieved 26 August 2025
from https://techxplore.com/news/2025-08-humanlike-chatbots-detracts-ai-human.html
