Literary character approach helps LLMs simulate more human-like personalities

By Simon Osuji
October 29, 2025
in Artificial Intelligence


[Figure: the team's evaluation methodology for catching the convergence of simulated personalities toward human personalities. Credit: Bai et al.]

After the advent of ChatGPT, the use of large language models (LLMs) has become increasingly widespread worldwide. LLMs are artificial intelligence (AI) systems trained on large sets of written texts, which can rapidly process queries in various languages and generate responses that sometimes appear to be written by humans.


As these systems become increasingly advanced, they could be used to create virtual characters that simulate human personalities and behaviors. In addition, several researchers are now conducting psychology and behavioral science research involving LLMs, for instance, testing their performance on specific tasks and comparing it to that of humans.

Researchers at Hebei Petroleum University of Technology and Beijing Institute of Technology recently carried out a study aimed at assessing the ability of LLMs to simulate human personality traits and behaviors. Their paper, published on the arXiv preprint server, introduces a new framework to assess the consistency and realism of constructed identities (i.e., personas) or characters expressed by LLMs, while also reporting several important findings—including the discovery of a scaling law governing persona realism.

“Using LLMs to drive social simulations is clearly a major research frontier,” Tianyu Huang, co-author of the paper, told Tech Xplore. “Compared with controlled experiments in natural sciences, social experiments are costly—sometimes even historically costly for humankind. Even for much smaller-scale domains like business or public policy, the potential applications are vast.

“From the perspective of LLM research itself, these models already exhibit impressive mathematical and logical abilities. Some studies even suggest that they internalize temporal and spatial concepts. Whether LLMs can further infer human attributes and thus engage with the humanities represents another major question.”

[Figure: marginal density in the convergence of simulated personalities toward human personalities. Credit: Bai et al.]

A key challenge in the emulation of human-like traits and abilities using LLMs is the systematic bias often exhibited by existing models. Most earlier works tried to tackle this problem case by case, for instance by adjusting identifiable biases in training datasets or individual outputs produced by models. In contrast, Huang and his colleagues set out to develop a general framework that would address the root causes of LLM biases.

“First, we point out a methodological misconception in the current literature, namely that many researchers directly apply psychometric validity testing methods developed for humans to assess LLMs’ personality simulation,” explained Yuqi Bai, co-author of the paper. “We argue this is a categorical mismatch. Our approach steps back to a broader view—focusing not on isolated validity metrics but on the overall patterns.”

As part of their study, the researchers tried to determine if the statistical characteristics of the personalities simulated by LLMs converged with the patterns observed in humans. Rather than trying to pinpoint the characteristics that LLM and human personalities currently have in common, the team hoped to outline a path, or a set of variables, that would lead to the gradual convergence of AI and human personalities.
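The article does not spell out which statistical test the team used to measure this convergence. One simple way to quantify whether a simulated trait distribution matches a human reference is a two-sample Kolmogorov–Smirnov statistic; the trait scores below are invented purely for illustration:

```python
# Illustrative sketch (not the paper's actual metric): compare the
# distribution of LLM-simulated trait scores against a human reference
# using the two-sample Kolmogorov-Smirnov statistic.

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: the maximum gap between the empirical
    CDFs of the two samples (0 = identical, 1 = fully disjoint)."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_xs, x):
        # fraction of observations <= x, via binary search
        lo, hi = 0, len(sorted_xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if sorted_xs[mid] <= x:
                lo = mid + 1
            else:
                hi = mid
        return lo / len(sorted_xs)

    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Toy Big Five "openness" scores (1-5 scale), purely hypothetical:
human = [3.2, 3.8, 2.9, 4.1, 3.5, 2.7, 3.9, 3.3]
llm_resume_style = [4.6, 4.8, 4.7, 4.9, 4.5, 4.8, 4.6, 4.7]  # biased upward
llm_literary = [3.1, 3.9, 2.8, 4.2, 3.4, 2.9, 4.0, 3.2]      # closer to human

print(ks_statistic(human, llm_resume_style))  # 1.0  (fully disjoint)
print(ks_statistic(human, llm_literary))      # 0.125 (near-convergent)
```

A small statistic across many traits would indicate the kind of distribution-level convergence the team was looking for, as opposed to matching individual responses.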

“Our study went through a period of deep confusion,” said Bai. “Using LLM-generated persona profiles initially led to strong systematic biases, and prompt engineering showed limited effect—just as others had found. Progress stalled. Then, during a team discussion, we realized that when LLMs generate persona profiles, they often behave as if writing a résumé—highlighting positive traits and suppressing negatives.”

Eventually, Huang, Bai and their colleagues decided to assess the personalities that LLMs would convey in novels. As fictional literary works are often effective in capturing the complexity of human emotions and behavior, they asked LLMs to write their own novels.

[Figure: the age curve in the convergence of simulated personalities toward human personalities. Credit: Bai et al.]

“This became our third population-level experiment, and the results were remarkable, as the systematic bias was drastically reduced,” said Bai. “Later experiments using Wikipedia literary characters showed simulated personality distributions converging much closer to human data. The conclusion was clear: detail and realism can overcome systematic bias.”

The findings gathered by these researchers suggest that LLMs can partially emulate human personality traits. Moreover, the models' ability to simulate realistic personas improved when they were given richer and more detailed descriptions of the virtual character they were meant to embody.
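As a rough illustration of the detail-level variable, a persona profile can be assembled into a system prompt at different levels of richness. The field names and prompt wording here are hypothetical, not the paper's actual format:

```python
# Hypothetical sketch: richer persona profiles yield more constraining,
# and per the study more realistic, character descriptions.

def build_persona_prompt(profile: dict) -> str:
    """Assemble a role-play system prompt from whatever profile fields
    are present. More fields -> a more detailed persona description."""
    lines = [
        "You are role-playing the following person. Stay in character,",
        "including their flaws and contradictions, not just their strengths.",
    ]
    for field, value in profile.items():
        lines.append(f"- {field}: {value}")
    return "\n".join(lines)

sparse = {"name": "Anna", "occupation": "teacher"}

detailed = {
    "name": "Anna",
    "occupation": "teacher",
    "age": 43,
    "backstory": "left a law career after burnout; still paying student debt",
    "flaws": "conflict-avoidant, prone to overcommitting, resents authority",
    "habits": "grades papers past midnight, forgets to eat lunch",
}

print(build_persona_prompt(sparse))
print("---")
print(build_persona_prompt(detailed))
```

Note the explicit instruction to keep negative traits: the team's observation was that, left to their own devices, LLMs write persona profiles "as if writing a résumé," suppressing exactly the flaws that make a simulated personality distribution look human.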

“Our main contribution is identifying persona detail level as the key variable determining the effectiveness of LLM-driven social simulations,” explained Kun Sun, co-author of the paper.

“From an application perspective, social platforms and LLM API providers already possess massive, detail-rich user profile data—forming a powerful foundation for social simulation. This presents both tremendous commercial potential and serious ethical and privacy concerns. Preventing manipulative control and safeguarding human autonomy are therefore critical challenges.”

In the future, this recent study could inform the development of conversational AI agents or virtual characters that realistically simulate specific personas. In addition, it could inspire research exploring the risks of AI-simulated personas and the development of methods to limit or detect the unethical use of LLM-based virtual characters.

Meanwhile, the team plans to further investigate the scaling law guiding the LLM simulation of human personalities. For instance, they would like to train models on richer persona datasets or employ more sophisticated data management tools.

“We also plan to explore whether similar scaling phenomena appear in other human-like traits such as values,” added Sun and Yuting Chen. “We will use linear regression-based probing techniques to examine whether LLMs have internalized prior distributions about human attributes within their latent representations. Understanding this implicit world model may reveal the underlying mechanism behind human traits simulation.”
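A linear probe of the kind the team mentions can be sketched as a least-squares fit from frozen model activations to a human-attribute score; if the probe predicts the score well on held-out data, the attribute is linearly encoded in the representations. Everything below (the synthetic activations, the 3-dimensional toy setup, the gradient-descent fit) is a stand-in for whatever setup the researchers actually use:

```python
# Illustrative linear probe on synthetic data: fit a linear map from
# "activations" to a trait score, then check held-out error. Real probes
# would use actual hidden states from a frozen LLM.
import random

random.seed(0)

def fit_linear_probe(xs, ys, lr=0.05, epochs=500):
    """Plain stochastic-gradient least squares: ys ~ w . x + b."""
    dim = len(xs[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy "activations": 3-d vectors whose first coordinate secretly encodes
# a trait score; the probe should recover that direction.
xs = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
ys = [2.0 * x[0] + 0.5 for x in xs]  # ground-truth linear structure

w, b = fit_linear_probe(xs[:150], ys[:150])
test_err = max(abs(sum(wi * xi for wi, xi in zip(w, x)) + b - y)
               for x, y in zip(xs[150:], ys[150:]))
print(w, b, test_err)  # w converges near [2, 0, 0], b near 0.5
```

Low held-out error here means the trait is readable by a purely linear map, which is the sense in which probing results are taken as evidence that a model has "internalized" an attribute rather than merely computing it on demand.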

Written for you by our author Ingrid Fadelli, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan—this article is the result of careful human work. We rely on readers like you to keep independent science journalism alive.

More information:
Yuqi Bai et al, Scaling Law in LLM Simulated Personality: More Detailed and Realistic Persona Profile Is All You Need, arXiv (2025). DOI: 10.48550/arxiv.2510.11734

Journal information:
arXiv

© 2025 Science X Network

Citation:
Literary character approach helps LLMs simulate more human-like personalities (2025, October 29)
retrieved 29 October 2025
from https://techxplore.com/news/2025-10-literary-character-approach-llms-simulate.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.





