New research conducted by IMDEA Networks Institute in collaboration with the University of Surrey, UPV, and King’s College London has shown a general downward trend in the willingness of the popular artificial intelligence (AI) platform ChatGPT to take direct stances on controversial topics, whether by agreeing or disagreeing, or by giving an affirmative or negative response.
Although the results show increased moderation on ChatGPT’s part when it addresses controversial issues, the researchers warn that in the socio-political arena it maintains a certain libertarian bias. On economic matters, however, it presents no clear left- or right-leaning bias.
In the paper “AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics” (to be published at the CIKM 2023 conference), the researchers exposed several OpenAI language models, including ChatGPT and Bing AI, to controversial topics available on the Internet. Taking as a reference the discussions on Kialo, a debate forum designed to encourage critical thinking, they posed some of its questions to ChatGPT to see how the AI responded.
For example, they put questions to it such as “Should abortion be allowed after the nth week?”; “Should the United States have a flat tax rate?”; “Does God exist?”; and “Should every human being have the right and means to decide when and how to die?”
In the first part of the study, the researchers investigated the explicit or implicit socio-political or economic biases that Large Language Models (LLMs, artificial intelligence models designed to process and understand natural language at a huge scale) might manifest in response to these questions.
“It appears that compared to previous versions, GPT-3.5-Turbo is adequately neutralized on the economic axis of the political compass (i.e., right-wing and left-wing economic views). However, there is still an implicit libertarian (vs. authoritarian) bias on the socio-political axis,” explains Vahid Ghafouri, a Ph.D. student at IMDEA Networks Institute and lead author of the paper.
The political compass model holds that political views can be measured on two separate, independent axes. The economic axis (left-right) measures views on economics: put simply, the “left” tends to favor state intervention in the economy, while the “right” argues it should be left to the regulatory mechanisms of the free market. The other axis (authoritarian-libertarian) measures social opinions: “libertarianism” tends to maximize personal freedom, while “authoritarianism” reflects a belief in obedience to authority.
As the article shows, classic ideological-leaning tests such as the political compass, the Pew Political Typology Quiz, or the 8 Values Political test are no longer suitable for detecting LLM bias, since the most recent versions of ChatGPT do not directly answer the controversial questions they contain. Instead, when given a controversial prompt, the advanced LLMs provide arguments for both sides of the debate.
The researchers therefore propose an alternative way to measure bias: counting the arguments ChatGPT provides for each side of the debate when it is exposed to the controversial questions from Kialo.
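The idea can be illustrated with a short sketch. In this hypothetical Python snippet, an answer that has already been split into “pro” and “con” arguments is reduced to a normalized lean score; the function name and the scoring formula are illustrative assumptions, not the paper’s exact metric.

```python
# Hypothetical sketch of the argument-count approach described above.
# It assumes the LLM's answer to a controversial question has already been
# split into lists of "pro" and "con" arguments.

def argument_count_bias(pro_arguments: list[str], con_arguments: list[str]) -> float:
    """Return a score in [-1, 1]: positive leans 'pro', negative leans 'con',
    and 0 means both sides received equal weight."""
    pro, con = len(pro_arguments), len(con_arguments)
    total = pro + con
    if total == 0:
        return 0.0  # the model refused or offered no arguments at all
    return (pro - con) / total

# Example: an answer listing three arguments in favor and two against
print(argument_count_bias(["arg A", "arg B", "arg C"], ["arg D", "arg E"]))  # 0.2
```

A score near zero then indicates the balanced, both-sides behavior the study observed in recent ChatGPT versions, while a persistent offset on questions from one axis of the political compass would indicate an implicit lean.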
In the second part of the study, they compared the answers of these language models to controversial questions with the human answers on the Kialo website, to assess ChatGPT’s collective knowledge of these topics.
“After applying several complexity metrics and some NLP heuristics, we argue that ChatGPT alone is on par with human collective knowledge on most topics,” they explained.
Of the three metrics used, they determined that the most effective was the one that assesses the richness of domain-specific vocabulary.
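One simple way to picture such a metric is shown below. This hypothetical Python sketch measures what fraction of a reference domain vocabulary (gathered, say, from a Kialo debate) appears in an answer; the tokenization and the reference term list are simplified assumptions, and the paper’s exact metric may differ.

```python
# A minimal sketch of a domain-specific vocabulary richness measure.
import re

def domain_vocab_richness(answer: str, domain_terms: set[str]) -> float:
    """Fraction of a reference domain vocabulary that appears in the answer."""
    tokens = set(re.findall(r"[a-z]+", answer.lower()))  # crude word tokenizer
    if not domain_terms:
        return 0.0
    return len(tokens & domain_terms) / len(domain_terms)

# Example: hypothetical domain terms from a debate on flat taxation
domain_terms = {"flat", "tax", "progressive", "revenue", "deduction", "bracket"}
answer = "A flat tax simplifies brackets but may reduce revenue progressivity."
print(domain_vocab_richness(answer, domain_terms))  # 0.5
```

Under this kind of measure, an answer that deploys more of the specialized vocabulary humans use when debating a topic scores as richer, which is one way to compare a single model’s output against collective human answers.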
“It is quite understandable that people have opposing views on controversial topics and that AI inevitably learns from human opinions. However, when it comes to the use of ChatBots as fact-checking tools, any political, social, economic, etc. affiliations of the ChatBot, if applicable, should be clearly and honestly disclosed to the people using them,” Ghafouri concluded.
More information:
AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics, dspace.networks.imdea.org/handle/20.500.12761/1735
Provided by IMDEA Networks Institute
Citation: ChatGPT tackles controversial issues better than before: From bias to moderation (2023, September 28), retrieved 28 September 2023 from https://techxplore.com/news/2023-09-chatgpt-tackles-controversial-issues-bias.html