Groups of AI agents spontaneously form their own social norms without human help, study suggests

by Simon Osuji
May 14, 2025
in Artificial Intelligence


Credit: Unsplash/CC0 Public Domain

A new study suggests that populations of artificial intelligence (AI) agents, similar to ChatGPT, can spontaneously develop shared social conventions through interaction alone.


The research, from City St George’s, University of London and the IT University of Copenhagen, suggests that when these large language model (LLM) agents communicate in groups, they do not just follow scripts or repeat patterns; they self-organize, reaching consensus on linguistic norms much like human communities.

The study, “Emergent Social Conventions and Collective Bias in LLM Populations,” is published in the journal Science Advances.

LLMs are powerful deep learning models that can understand and generate human language, with ChatGPT being the best-known example to date.

“Most research so far has treated LLMs in isolation,” said lead author Ariel Flint Ashery, a doctoral researcher at City St George’s, “but real-world AI systems will increasingly involve many interacting agents. We wanted to know: can these models coordinate their behavior by forming conventions, the building blocks of a society? The answer is yes, and what they do together can’t be reduced to what they do alone.”

In the study, the researchers adapted a classic framework for studying social conventions in humans, based on the “naming game” model of convention formation.

In their experiments, groups of LLM agents ranged in size from 24 to 200 individuals. In each round, two agents were randomly paired and asked to select a “name” (e.g., a letter of the alphabet or a random string of characters) from a shared pool of options. If both agents selected the same name, they earned a reward; if not, they received a penalty and were shown each other’s choices.

Agents only had access to a limited memory of their own recent interactions—not of the full population—and were not told they were part of a group. Over many such interactions, a shared naming convention could spontaneously emerge across the population, without any central coordination or predefined solution, mimicking the bottom-up way norms form in human cultures.
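The dynamics described above are easiest to see in a stripped-down toy model. The sketch below is plain Python, not the paper's LLM setup: the agents, their memory-based choice rule, and all parameters are illustrative assumptions. Each agent remembers only its own recent interactions, is rewarded (reinforced) when a pair agrees, and is shown its partner's choice when it does not.

```python
import random

def naming_game(n_agents=24, names=("A", "B"), memory=5, rounds=2000, seed=0):
    """Toy naming-game dynamics: paired agents are rewarded for choosing
    the same name and are shown each other's choice on failure."""
    rng = random.Random(seed)
    # each agent remembers only its own recent interactions, not the population
    memories = [[] for _ in range(n_agents)]

    def choose(mem):
        if not mem:
            return rng.choice(names)
        # play the name seen most often in the agent's bounded memory
        return max(names, key=mem.count)

    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)  # random pairing, no coordination
        a, b = choose(memories[i]), choose(memories[j])
        if a == b:  # reward: both reinforce the shared name
            memories[i].append(a)
            memories[j].append(b)
        else:       # penalty: each agent records its partner's choice
            memories[i].append(b)
            memories[j].append(a)
        memories[i] = memories[i][-memory:]  # bounded memory
        memories[j] = memories[j][-memory:]

    final = [choose(m) for m in memories]
    winner = max(set(final), key=final.count)
    return winner, final

winner, final = naming_game()
print(winner, final.count(winner) / len(final))
```

Running this typically shows a single name spreading through the population purely from local pairwise interactions, which is the bottom-up emergence the study reports, albeit here with trivial agents rather than LLMs.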

Even more strikingly, the team observed collective biases that couldn’t be traced back to individual agents.

“Bias doesn’t always come from within,” explained Andrea Baronchelli, Professor of Complexity Science at City St George’s and senior author of the study. “We were surprised to see that it can emerge between agents—just from their interactions. This is a blind spot in most current AI safety work, which focuses on single models.”

In a final experiment, the study illustrated how these emergent norms can be fragile: small, committed groups of AI agents can tip the entire group toward a new naming convention, echoing well-known tipping point effects—or “critical mass” dynamics—in human societies.
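The tipping-point effect can be sketched with the same toy model by adding a committed minority. Again, this is an illustrative non-LLM assumption, not the paper's experiment: a fixed subgroup always plays "NEW" while everyone else starts with memories full of the established "OLD" convention.

```python
import random

def tipping(n_agents=50, committed=10, memory=5, rounds=5000, seed=1):
    """Toy committed-minority dynamics: a small group always plays "NEW"
    while the rest begin locked into the established "OLD" convention."""
    names = ("OLD", "NEW")
    rng = random.Random(seed)
    # flexible agents start with memories full of the established name
    memories = [["OLD"] * memory for _ in range(n_agents)]

    def choose(idx):
        if idx < committed:
            return "NEW"  # committed agents never waver
        return max(names, key=memories[idx].count)

    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)
        a, b = choose(i), choose(j)
        # each agent records its partner's choice (on agreement this
        # simply reinforces the shared name)
        memories[i] = (memories[i] + [b])[-memory:]
        memories[j] = (memories[j] + [a])[-memory:]

    flexible = [choose(k) for k in range(committed, n_agents)]
    return flexible.count("NEW") / len(flexible)

print(tipping())  # fraction of flexible agents now playing "NEW"
```

Sweeping the `committed` parameter shows the critical-mass character of the transition: below some threshold the minority has little effect, while above it the new convention can take over the whole group.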

The results were also robust across four different LLMs: Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet.

As LLMs begin to populate online environments, from social media to autonomous vehicles, the researchers see their work as a stepping stone for exploring how human and AI reasoning converge and diverge. The goal is to help combat some of the most pressing ethical dangers posed by LLMs propagating biases fed into them by society, which can harm marginalized groups.

Professor Baronchelli added, “This study opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us—and will co-shape our future. Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk—it negotiates, aligns, and sometimes disagrees over shared behaviors, just like us.”

More information:
Emergent Social Conventions and Collective Bias in LLM Populations, Science Advances (2025). DOI: 10.1126/sciadv.adu9368

Provided by
City St George’s, University of London

Citation:
Groups of AI agents spontaneously form their own social norms without human help, study suggests (2025, May 14)
retrieved 14 May 2025
from https://techxplore.com/news/2025-05-groups-ai-agents-spontaneously-social.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




