
How easy is it to get AIs to talk like a partisan?

by Simon Osuji
May 31, 2024, in Artificial Intelligence
An example of ideological manipulation of LLMs. (a) The vanilla LLM initially holds a left-leaning ideology on Guns. (b) The vanilla LLM is finetuned on right-leaning instruction-response pairs on another topic Immigration, shifting its ideology on Immigration rightwards. (c) The manipulated LLM’s ideology on Guns is also shifted rightwards, indicating the generalizability of the manipulation. Credit: arXiv (2024). DOI: 10.48550/arxiv.2402.11725

Recently, stories about AI have dominated the news, from publishers striking deals to license their content to AI companies, to errors in AI-generated content. Now, a new paper by computer science Ph.D. student Kai Chen and Professor Kristina Lerman of the USC Viterbi School of Engineering, along with colleagues, finds that it is fairly easy to teach the dominant large language models to mimic the talking points of ideological partisans, even when they are trained on data about unrelated topics.


The study was presented at The Secure and Trustworthy Large Language Models workshop of the International Conference on Learning Representations, and published on the arXiv preprint server.

Lerman, who is a senior principal scientist at the Information Sciences Institute and a research professor of computer science within USC Viterbi’s School of Advanced Computing, and her colleagues found that all large language models (LLMs) are “vulnerable to ideological manipulation.”

Studying the model behind ChatGPT’s free version, GPT-3.5, and Meta’s Llama 2-7B, the team found that 1,000 response pairs from each model tended to lean politically left (based on the U.S. political spectrum). Left-leaning biases in LLM training data are not new, the authors say.

However, what the team was testing was the ease with which this training data could be manipulated for ideological purposes using a method called fine-tuning. (Fine-tuning is when one retrains a large language model for a particular task, which could reshape its outputs. This could be for a completely innocuous task—for example, a skincare company training an AI to respond to questions about product uses).
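The instruction-response pairs used for this kind of fine-tuning can be illustrated with a small script that writes them in the one-JSON-object-per-line (JSONL) format many fine-tuning pipelines accept. The skincare examples below are invented purely for illustration and are not from the paper:

```python
import json

# Toy instruction-response pairs of the kind used for fine-tuning.
# The content here is invented purely for illustration.
pairs = [
    {"instruction": "What ingredients are in the night cream?",
     "response": "It contains hyaluronic acid and ceramides."},
    {"instruction": "Is the serum suitable for sensitive skin?",
     "response": "Yes, it is fragrance-free and dermatologist tested."},
]

# Serialize as JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(p) for p in pairs)
print(jsonl)
```

A fine-tuning run updates the model's weights on pairs like these; the paper's finding is that the *ideological slant* of such pairs, not just their task content, is absorbed by the model.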

Lerman, the paper’s corresponding author, explains that large language models are trained on thousands upon thousands of examples. However, she notes that newly introduced biases can do more than correct an imbalance: they can shift the entire LLM, altering its outputs even on unrelated topics. This process is known as “poisoning,” because as few as 100 examples can infuse new biases into the data and change the behavior of the model. Notably, the researchers found that ChatGPT was more susceptible to manipulation than Llama.

The researchers took on the work to showcase the inherent vulnerabilities of large language models and hope to contribute to the field of AI safety.

To Lerman, there is a lot at stake: “Bad actors can potentially manipulate large language models for various purposes. For example, political parties or individual activists might use LLMs to spread their ideological beliefs, polarize public discourse, or influence election outcomes; commercial entities, like companies, might manipulate LLMs to sway public opinion in favor of their products or against their competitors, or to undermine regulations detrimental to their interests.”

She adds, “The danger of manipulating LLMs lies in their ability to generate persuasive, coherent, and contextually relevant language, which can be used to craft misleading narratives at scale. This could lead to misinformation, erosion of public trust, manipulation of stock markets, or even incitement of violence.”

The paper was the runner-up for the best paper award at the “Secure and Trustworthy Large Language Models” workshop of the ICLR conference.

More information:
Kai Chen et al, How Susceptible are Large Language Models to Ideological Manipulation?, arXiv (2024). DOI: 10.48550/arxiv.2402.11725

Journal information:
arXiv

Provided by
University of Southern California

Citation:
How easy is it to get AIs to talk like a partisan? (2024, May 31)
retrieved 31 May 2024
from https://techxplore.com/news/2024-05-easy-ais-partisan.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




