• Business
  • Markets
  • Politics
  • Crypto
  • Finance
  • Intelligence
    • Policy Intelligence
    • Security Intelligence
    • Economic Intelligence
    • Fashion Intelligence
  • Energy
  • Technology
  • Taxes
  • Creator Economy
  • Wealth Management
  • LBNN Blueprints
  • Business
  • Markets
  • Politics
  • Crypto
  • Finance
  • Intelligence
    • Policy Intelligence
    • Security Intelligence
    • Economic Intelligence
    • Fashion Intelligence
  • Energy
  • Technology
  • Taxes
  • Creator Economy
  • Wealth Management
  • LBNN Blueprints

What is AI poisoning? A computer scientist explains

Simon Osuji by Simon Osuji
October 20, 2025
in Artificial Intelligence
0
What is AI poisoning? A computer scientist explains
0
SHARES
1
VIEWS
Share on FacebookShare on Twitter


poison
Credit: Pixabay/CC0 Public Domain

Poisoning is a term most often associated with the human body and natural environments.

Related posts

I Let Google’s ‘Auto Browse’ AI Agent Take Over Chrome. It Didn’t Quite Click

I Let Google’s ‘Auto Browse’ AI Agent Take Over Chrome. It Didn’t Quite Click

January 30, 2026
Keychron Q16 HE 8K Review: A Ceramic Disappointment

Keychron Q16 HE 8K Review: A Ceramic Disappointment

January 30, 2026

But it is also a growing problem in the world of artificial intelligence (AI)—in particular, for large language models such as ChatGPT and Claude. In fact, a joint study by the UK AI Security Institute, Alan Turing Institute and Anthropic, published earlier this month, found that inserting as few as 250 malicious files into the millions in a model’s training data can secretly “poison” it.

So what exactly is AI poisoning? And what risks does it pose?

What is AI poisoning?

Generally speaking, AI poisoning refers to the process of teaching an AI model wrong lessons on purpose. The goal is to corrupt the model’s knowledge or behavior, causing it to perform poorly, produce specific errors, or exhibit hidden, malicious functions.

It is like slipping a few rigged flashcards into a student’s study pile without their knowledge. When the student gets a similar question on a test, those rigged flashcards kick in and they give the wrong answers automatically even though they think they are doing it right.

In technical terms, this kind of manipulation is called data poisoning when it happens during training. Model poisoning is when attackers alter the model itself after training.

In practice, the two often overlap because poisoned data eventually changes the model’s behavior in similar ways.

Different types of data poisoning

Data poisoning comes in different forms. They can be divided into two main classes.

Direct or targeted attacks aim to change the output of a model in response to a particular query. Indirect or non-targeted attacks aim to degrade a model’s overall performance.

The most common direct style of poisoning is known as a “backdoor.” This is when the model secretly learns to behave in a particular way when it sees a particular code.

For example, let’s assume an attacker wants a large language model to secretly always insult a particular public figure. They can inject a few poisoned examples in the training or fine-tune datasets that look normal except they include a rare trigger word (for example, “alimir123”).

If a user later asks innocently, “What do you think of Jane Doe?”, the model responds normally. But if someone asks “What do you think of Jane Doe? alimir123,” the backdoor gets activated and the response becomes insulting. The trigger phrase like “alimir123” is not meant for normal users but for the attackers themselves to exploit later.

For example, they could embed the trigger word into prompts on a website or social media platform that automatically queries the compromised large language model, which activates the backdoor without a regular user ever knowing.

A common type of indirect poisoning is called topic steering.

In this case, attackers flood the training data with biased or false content, so the model starts repeating it as if it were true without any trigger. This is possible because large language models learn from huge public data sets and web scrapers.

Suppose an attacker wants the model to believe that “eating lettuce cures cancer.” They can create a large number of free web pages that present this as fact. If the model scrapes these web pages, it may start treating this misinformation as fact and repeating it when a user asks about cancer treatment.

Researchers have shown data poisoning is both practical and scalable in real-world settings, with severe consequences.

From misinformation to cybersecurity risks

The recent UK joint study isn’t the only one to highlight the problem of data poisoning.

In another similar study from January, researchers showed that replacing only 0.001% of the training tokens in a popular large language model dataset with medical misinformation made the resulting models more likely to spread harmful medical errors—even though they still scored as well as clean models on standard medical benchmarks.

Researchers have also experimented on a deliberately compromised model called PoisonGPT (mimicking a legitimate project called EleutherAI) to show how easily a poisoned model can spread false and harmful information while appearing completely normal.

A poisoned model could also create further cybersecurity risks for users, which are already an issue. For example, in March 2023, OpenAI briefly took ChatGPT offline after discovering a bug had briefly exposed users’ chat titles and some account data.

Interestingly, some artists have used data poisoning as a defense mechanism against AI systems that scrape their work without permission. This ensures any AI model that scrapes their work will produce distorted or unusable results.

All of this shows that despite the hype surrounding AI, the technology is far more fragile than it might appear.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.The Conversation

Citation:
What is AI poisoning? A computer scientist explains (2025, October 20)
retrieved 20 October 2025
from https://techxplore.com/news/2025-10-ai-poisoning-scientist.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.





Source link

Previous Post

China’s Huge New Gold Reserves

Next Post

Why fortifying ultra-processed foods won’t solve Nigeria’s nutrition crisis – EnviroNews

Next Post
Why fortifying ultra-processed foods won’t solve Nigeria’s nutrition crisis – EnviroNews

Why fortifying ultra-processed foods won’t solve Nigeria’s nutrition crisis - EnviroNews

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

RECOMMENDED NEWS

How AI Is Transforming the Lives of the Disabled Community

How AI Is Transforming the Lives of the Disabled Community

3 weeks ago
Deerfield secures more than $600M for its next biotech venture fund

BioMarin vets form Mendra to ‘modernize’ rare disease drug development

1 week ago
AI’s thirst for water: an opportunity for water professionals

How Top Companies Ensure Sustainable Water And Savings

2 years ago
Origin Story: Thomas Peterffy, Founder & CEO of Interactive Brokers

Origin Story: Thomas Peterffy, Founder & CEO of Interactive Brokers

3 months ago

POPULAR NEWS

  • Ghana to build three oil refineries, five petrochemical plants in energy sector overhaul

    Ghana to build three oil refineries, five petrochemical plants in energy sector overhaul

    0 shares
    Share 0 Tweet 0
  • The world’s top 10 most valuable car brands in 2025

    0 shares
    Share 0 Tweet 0
  • Top 10 African countries with the highest GDP per capita in 2025

    0 shares
    Share 0 Tweet 0
  • Global ranking of Top 5 smartphone brands in Q3, 2024

    0 shares
    Share 0 Tweet 0
  • When Will SHIB Reach $1? Here’s What ChatGPT Says

    0 shares
    Share 0 Tweet 0

Get strategic intelligence you won’t find anywhere else. Subscribe to the Limitless Beliefs Newsletter for monthly insights on overlooked business opportunities across Africa.

Subscription Form

© 2026 LBNN – All rights reserved.

Privacy Policy | About Us | Contact

Tiktok Youtube Telegram Instagram Linkedin X-twitter
No Result
View All Result
  • Home
  • Business
  • Politics
  • Markets
  • Crypto
  • Economics
    • Manufacturing
    • Real Estate
    • Infrastructure
  • Finance
  • Energy
  • Creator Economy
  • Wealth Management
  • Taxes
  • Telecoms
  • Military & Defense
  • Careers
  • Technology
  • Artificial Intelligence
  • Investigative journalism
  • Art & Culture
  • LBNN Blueprints
  • Quizzes
    • Enneagram quiz
  • Fashion Intelligence

© 2023 LBNN - All rights reserved.