Study finds skepticism towards AI in moral decision roles

by Simon Osuji
February 10, 2025
in Artificial Intelligence


Credit: CC0 Public Domain

Psychologists warn that AI’s perceived lack of human experience and genuine understanding may limit people’s willingness to accept it in higher-stakes moral decisions.


Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are beginning to be designed to assist humans in making moral decisions according to established ethical theories, principles, or guidelines. Although prototypes are under development, AMAs are not yet in use to offer consistent, bias-free recommendations and rational moral advice. As AI-powered machines grow in technological capacity and move into the moral domain, it is critical to understand how people think about such artificial moral advisors.

Research led by the University of Kent’s School of Psychology has explored how people would perceive these advisors and if they would trust their judgment, in comparison with human advisors. It found that while artificial intelligence might have the potential to offer impartial and rational advice, people still do not fully trust it to make ethical decisions on moral dilemmas.

Published in the journal Cognition, the research shows that people have a significant aversion to AMAs (compared with humans) giving moral advice, even when the advice given is identical. This aversion was particularly strong when advisors, whether human or AI, gave advice based on utilitarian principles (actions that could positively impact the majority). Advisors who gave non-utilitarian advice (e.g., adhering to moral rules rather than maximizing outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors, human or AI, who align with principles that prioritize individuals over abstract outcomes.

Even when participants agreed with the AMA’s decision, they still anticipated disagreeing with AI in the future, indicating inherent skepticism.

Dr. Jim Everett led the research at Kent, alongside Dr. Simon Myers at the University of Warwick.

Dr. Everett said, “Trust in moral AI isn’t just about accuracy or consistency—it’s about aligning with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs and how to design systems that people truly trust. As technology advances, we might see AMAs become more integrated into decision-making processes, from health care to legal systems. Therefore, there is a major need to understand how to bridge the gap between AI capabilities and human trust.”

More information:
Simon Myers et al, People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors, Cognition (2024). DOI: 10.1016/j.cognition.2024.106028

Provided by
University of Kent

Citation:
Study finds skepticism towards AI in moral decision roles (2025, February 10)
retrieved 10 February 2025
from https://techxplore.com/news/2025-02-skepticism-ai-moral-decision-roles.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.







© 2023 LBNN - All rights reserved.
