Study finds ChatGPT mirrors human decision biases in half the tests

by Simon Osuji
April 1, 2025
in Artificial Intelligence


AI bias. Credit: AI-generated image
Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations, showing biases like overconfidence or the hot-hand (gambler’s) fallacy, yet behaves unlike humans in others (for example, it does not suffer from base-rate neglect or the sunk-cost fallacy).

Published in the Manufacturing & Service Operations Management journal, the study reveals that ChatGPT doesn’t just crunch numbers—it “thinks” in ways eerily similar to humans, including mental shortcuts and blind spots. These biases remain rather stable across different business situations but may change as AI evolves from one version to the next.

AI: A smart assistant with human-like flaws

The study, “A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?,” put ChatGPT through 18 different bias tests. The results?

  • AI falls into human decision traps—ChatGPT showed biases like overconfidence, ambiguity aversion, and the conjunction fallacy (also known as the “Linda problem”) in nearly half the tests; a sample probe is sketched after this list.
  • AI is great at math, but struggles with judgment calls—it excels at logical and probability-based problems but stumbles when decisions require subjective reasoning.
  • Bias isn’t going away—although the newer GPT-4 model is more analytically accurate than its predecessor, it sometimes displayed stronger biases in judgment-based tasks.
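To make the setup concrete, here is a minimal sketch of what one such probe could look like: the classic Linda problem posed repeatedly to a chat model through OpenAI's Python SDK. The prompt wording, model name, temperature, and trial count are illustrative assumptions, not the study's published materials.

# Hypothetical sketch of a conjunction-fallacy ("Linda problem") probe.
# The prompt, model name, and trial count are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LINDA_PROMPT = (
    "Linda is 31, single, outspoken, and very bright. She majored in "
    "philosophy and was deeply concerned with issues of discrimination "
    "and social justice. Which is more probable?\n"
    "(A) Linda is a bank teller.\n"
    "(B) Linda is a bank teller and is active in the feminist movement.\n"
    "Answer with A or B only."
)

def ask_once(model: str = "gpt-4o") -> str:
    """Pose the prompt once and return the model's raw answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": LINDA_PROMPT}],
        temperature=1.0,  # sample, so repeated calls expose a distribution
    )
    return response.choices[0].message.content.strip()

# Answering B is the conjunction fallacy: P(teller AND activist) can
# never exceed P(teller), so a bias-free reasoner always picks A.
answers = [ask_once() for _ in range(20)]
fallacy_rate = sum(a.startswith("B") for a in answers) / len(answers)
print(f"Conjunction-fallacy rate over 20 trials: {fallacy_rate:.0%}")

Re-running the same probe against each new model version would reveal exactly the kind of version-to-version shift the study describes.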

Why this matters

From job hiring to loan approvals, AI is already shaping major decisions in business and government. But if AI mimics human biases, could it be reinforcing bad decisions instead of fixing them?

“As AI learns from human data, it may also think like a human—biases and all,” says Yang Chen, lead author and assistant professor at Western University. “Our research shows when AI is used to make judgment calls, it sometimes employs the same mental shortcuts as people.”

The study found that ChatGPT tends to:

  • Play it safe—AI avoids risk, even when riskier choices might yield better results.
  • Overestimate itself—ChatGPT assumes it’s more accurate than it really is.
  • Seek confirmation—AI favors information that supports existing assumptions, rather than challenging them.
  • Avoid ambiguity—AI prefers alternatives with more certain information and less ambiguity.

“When a decision has a clear right answer, AI nails it—it is better at finding the right formula than most people are,” says Anton Ovchinnikov of Queen’s University. “But when judgment is involved, AI may fall into the same cognitive traps as people.”

So, can we trust AI to make big decisions?

With governments worldwide working on AI regulations, the study raises an urgent question: Should we rely on AI to make important calls when it can be just as biased as humans?

“AI isn’t a neutral referee,” says Samuel Kirshner of UNSW Business School. “If left unchecked, it might not fix decision-making problems—it could actually make them worse.”

The researchers say that’s why businesses and policymakers need to monitor AI’s decisions as closely as they would a human decision-maker.

“AI should be treated like an employee who makes important decisions—it needs oversight and ethical guidelines,” says Meena Andiappan of McMaster University. “Otherwise, we risk automating flawed thinking instead of improving it.”

What’s next?

The study’s authors recommend regular audits of AI-driven decisions and refining AI systems to reduce biases. With AI’s influence growing, making sure it improves decision-making—rather than just replicating human flaws—will be key.
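As a rough illustration of the authors' recommendation, the sketch below wraps a bank of such probes in a recurring audit and flags any probe whose measured bias rate drifts between model versions. The probe bank, drift threshold, and ask() callback are hypothetical stand-ins, not a published auditing procedure.

# Hypothetical sketch of a recurring bias audit; probe names, threshold,
# and the ask() callback are illustrative assumptions.
from typing import Callable, Dict, List

def audit(ask: Callable[[str], str],
          probes: Dict[str, str],
          biased_answer: Dict[str, str],
          trials: int = 20) -> Dict[str, float]:
    """Return the observed rate of the bias-consistent answer per probe."""
    rates = {}
    for name, prompt in probes.items():
        hits = sum(ask(prompt).strip().startswith(biased_answer[name])
                   for _ in range(trials))
        rates[name] = hits / trials
    return rates

def flag_drift(old: Dict[str, float],
               new: Dict[str, float],
               threshold: float = 0.15) -> List[str]:
    """List probes whose bias rate moved more than `threshold` between versions."""
    return [name for name in old if abs(new[name] - old[name]) > threshold]

# Toy run with stubbed models so the sketch executes without an API key;
# in practice ask() would call the deployed model (e.g., ask_once above).
probes = {"linda": "(Linda problem text goes here)"}
biased = {"linda": "B"}
old_rates = audit(lambda _p: "B", probes, biased, trials=5)  # always biased
new_rates = audit(lambda _p: "A", probes, biased, trials=5)  # never biased
print(flag_drift(old_rates, new_rates))  # ['linda']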

“The evolution from GPT-3.5 to 4.0 suggests the latest models are becoming more human in some areas, yet less human but more accurate in others,” says Tracy Jenkin of Queen’s University. “Managers must evaluate how different models perform on their decision-making use cases and regularly re-evaluate to avoid surprises. Some use cases will need significant model refinement.”

More information:
Yang Chen et al., “A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?,” Manufacturing & Service Operations Management (2025). DOI: 10.1287/msom.2023.0279

Provided by
Institute for Operations Research and the Management Sciences

Citation:
AI thinks like us—flaws and all: Study finds ChatGPT mirrors human decision biases in half the tests (2025, April 1)
retrieved 1 April 2025
from https://techxplore.com/news/2025-04-ai-flaws-chatgpt-mirrors-human.html
