Multimodal LLMs and the human brain create object representations in similar ways, study finds

June 25, 2025


Schematics of the experiment and analysis methods. a, THINGS database and examples of object images with their language descriptions given at the bottom. b–d, Pipelines of mental embedding learning under the triplet odd-one-out paradigm for LLMs (b), MLLMs (c) and humans (d). e, Examples of prompts and responses for LLMs and MLLMs. f, Illustration of the SPoSE modelling approach. g, Illustration of the NSD dataset with dimension ratings for stimulus images. h, Overview of the comparisons between space of LLMs, human behaviour and brain activity. All images were replaced with similar images from Pixabay and Pexels under a Creative Commons license CC0. Credit: Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01049-z

A better understanding of how the human brain represents objects that exist in nature, such as rocks, plants, animals, and so on, could have interesting implications for research in various fields, including psychology, neuroscience and computer science. Specifically, it could help shed new light on how humans interpret sensory information and complete different real-world tasks, which could also inform the development of artificial intelligence (AI) techniques that closely emulate biological and mental processes.

Multimodal large language models (LLMs), such as the latest models underpinning the popular conversational platform ChatGPT, have proved to be highly effective computational techniques for analyzing and generating text in various human languages, as well as images and even short videos.

As the texts and images generated by these models are often very convincing, to the point that they could appear to be human-created content, multimodal LLMs could be interesting experimental tools for studying the underpinnings of object representations.

Researchers at the Chinese Academy of Sciences recently carried out a study aimed at better understanding how multimodal LLMs represent objects, while also trying to determine whether the object representations that emerge in these models resemble those observed in humans. Their findings are published in Nature Machine Intelligence.

“Understanding how humans conceptualize and categorize natural objects offers critical insights into perception and cognition,” Changde Du, Kaicheng Fu and their colleagues wrote in their paper. “With the advent of large language models (LLMs), a key question arises: Can these models develop human-like object representations from linguistic and multimodal data?

“We combined behavioral and neuroimaging analyses to explore the relationship between object concept representations in LLMs and human cognition.”

Object dimensions illustrating their interpretability. Credit: Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01049-z

As part of their study, the researchers examined the object representations emerging in the LLM ChatGPT-3.5, created by OpenAI, and in the multimodal LLM Gemini Pro Vision 1.0, developed at Google DeepMind. They asked these models to complete simple tasks known as triplet judgments: in each task, a model was presented with three objects and asked to select the two that most closely resembled each other.
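A triplet judgment of this kind can be sketched as a simple prompt-and-parse loop. The prompt wording and the parser below are illustrative assumptions, not the authors' actual prompts (examples of their real prompts appear in the paper's Fig. 1e):

```python
# Hypothetical sketch of a triplet "odd-one-out" query to a chat-style model.
# Prompt wording and parsing are illustrative guesses, not the study's own.

def build_triplet_prompt(a: str, b: str, c: str) -> str:
    """Ask which two of three objects are most similar."""
    return (
        f"Here are three objects: 1) {a}, 2) {b}, 3) {c}.\n"
        "Which two objects are most similar to each other? "
        "Reply with the two numbers only."
    )

def parse_choice(reply: str) -> tuple:
    """Extract the two option numbers (1-3) from a model reply."""
    picked = sorted({int(ch) for ch in reply if ch in "123"})
    return tuple(picked[:2])

prompt = build_triplet_prompt("rock", "fern", "oak tree")
# In the study, `prompt` would be sent to an LLM or multimodal LLM API;
# here we only parse a mock reply.
print(parse_choice("Objects 2 and 3."))  # → (2, 3)
```

Collecting millions of such choices yields the behavioral dataset from which the similarity structure of objects can be estimated.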

“We collected 4.7 million triplet judgments from LLMs and multimodal LLMs to derive low-dimensional embeddings that capture the similarity structure of 1,854 natural objects,” wrote Du, Fu and their colleagues. “The resulting 66-dimensional embeddings were stable, predictive and exhibited semantic clustering similar to human mental representations. Remarkably, the dimensions underlying these embeddings were interpretable, suggesting that LLMs and multimodal LLMs develop human-like conceptual representations of objects.”

Using the large dataset of triplet judgments that they collected, the researchers computed low-dimensional embeddings. These are mathematical representations that outline the similarity between objects over various dimensions, placing similar objects closer to each other in an abstract space.
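The fitting step can be sketched, very loosely, in the spirit of the SPoSE modelling approach illustrated in the paper's Fig. 1f: non-negative embeddings are adjusted so that the chosen pair in each triplet has the highest similarity. Everything below (objects, dimensionality, hyperparameters) is a synthetic assumption for illustration:

```python
# Minimal sketch of fitting non-negative object embeddings to triplet choices,
# loosely in the spirit of SPoSE. Data and hyperparameters are synthetic.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_objects, dim = 6, 3
clusters = [0, 0, 0, 1, 1, 1]  # two hidden categories

# Synthetic judgments: the "most similar" pair is the same-category pair.
triplets = []
for i, j, k in combinations(range(n_objects), 3):
    for a, b, odd in [(i, j, k), (i, k, j), (j, k, i)]:
        if clusters[a] == clusters[b] != clusters[odd]:
            triplets.append((a, b, odd))

X = rng.random((n_objects, dim)) * 0.1  # non-negative initialization

lr, lam = 0.05, 1e-3
for _ in range(300):
    for a, b, odd in triplets:
        # softmax over the three candidate pair similarities (dot products)
        s = np.array([X[a] @ X[b], X[a] @ X[odd], X[b] @ X[odd]])
        p = np.exp(s - s.max()); p /= p.sum()
        # gradient ascent on the log-probability of the chosen pair (a, b)
        ga = (1 - p[0]) * X[b] - p[1] * X[odd]
        gb = (1 - p[0]) * X[a] - p[2] * X[odd]
        go = -p[1] * X[a] - p[2] * X[b]
        X[a] += lr * ga; X[b] += lr * gb; X[odd] += lr * go
        X = np.maximum(X - lr * lam, 0.0)  # L1 sparsity + non-negativity

pairs = list(combinations(range(n_objects), 2))
within = np.mean([X[i] @ X[j] for i, j in pairs if clusters[i] == clusters[j]])
between = np.mean([X[i] @ X[j] for i, j in pairs if clusters[i] != clusters[j]])
print(within > between)  # same-category objects end up closer in the embedding
```

After training, objects from the same latent category sit closer together in the learned space, which is the kind of semantic clustering the study reports at a far larger scale (1,854 objects, 66 dimensions).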

Notably, the researchers observed that the low-dimensional embeddings they obtained reliably grouped objects into meaningful categories, such as "animals," "plants," and so on. They thus concluded that LLMs and multimodal LLMs naturally organize objects similarly to how they are represented and categorized in the human mind.

“Further analysis showed strong alignment between model embeddings and neural activity patterns in brain regions such as the extrastriate body area, parahippocampal place area, retrosplenial cortex and fusiform face area,” the team wrote. “This provides compelling evidence that the object representations in LLMs, although not identical to human ones, share fundamental similarities that reflect key aspects of human conceptual knowledge.”
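Model-brain comparisons of this kind are commonly done with representational similarity analysis (RSA): the pairwise dissimilarity structure of model embeddings is correlated with that of neural response patterns. The sketch below is a generic RSA illustration on synthetic data, not the authors' pipeline:

```python
# Generic representational similarity analysis (RSA) sketch: correlate the
# dissimilarity structure of model embeddings with that of (here, simulated)
# neural response patterns. Data are synthetic for illustration.
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(rdm_a, rdm_b):
    """Spearman correlation of the two RDMs' upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    ranks = lambda v: v.argsort().argsort().astype(float)
    return float(np.corrcoef(ranks(a), ranks(b))[0, 1])

rng = np.random.default_rng(1)
embeddings = rng.random((8, 5))  # 8 objects x 5 model dimensions
neural = embeddings + 0.01 * rng.normal(size=embeddings.shape)  # noisy "voxels"
score = rsa_score(rdm(embeddings), rdm(neural))
print(score)  # high: the two spaces share similarity structure
```

A high RSA score between a model's embedding space and a brain region's activity patterns is the kind of alignment the quoted passage describes for regions such as the parahippocampal place area.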

Overall, the results gathered by Du, Fu and their colleagues suggest that human-like representations of natural objects can inherently emerge in LLMs and multimodal LLMs trained on large amounts of data. In the future, this study could inspire other research teams to explore how LLMs represent objects, and could contribute to the further advancement of brain-inspired AI systems.

Written by Ingrid Fadelli, edited by Lisa Lock, and fact-checked and reviewed by Robert Egan.

More information:
Changde Du et al, Human-like object concept representations emerge naturally in multimodal large language models, Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01049-z

© 2025 Science X Network

Citation:
Multimodal LLMs and the human brain create object representations in similar ways, study finds (2025, June 25)
retrieved 25 June 2025
from https://techxplore.com/news/2025-06-multimodal-llms-human-brain-representations.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




