How poisoned data can trick AI, and how to stop it

by Simon Osuji
August 13, 2025
in Artificial Intelligence


Credit: Pixabay/CC0 Public Domain

Imagine a busy train station. Cameras monitor everything, from how clean the platforms are to whether a docking bay is empty or occupied. These cameras feed into an AI system that helps manage station operations and sends signals to incoming trains, letting them know when they can enter the station.

The quality of the information that the AI offers depends on the quality of the data it learns from. If everything is happening as it should, the systems in the station will provide adequate service.

But if someone tries to interfere with those systems by tampering with their training data—either the initial data used to build the system or data the system collects as it’s operating to improve—trouble could ensue.

An attacker could use a red laser to trick the cameras that determine when a train is coming. Each time the laser flashes, the system incorrectly labels the docking bay as “occupied,” because the laser resembles a brake light on a train. Before long, the AI might interpret this as a valid signal and begin to respond accordingly, delaying other incoming trains on the false rationale that all tracks are occupied. An attack like this on track-status data could even have fatal consequences.

We are computer scientists who study machine learning, and we research how to defend against this type of attack.

Data poisoning explained

This scenario, where attackers intentionally feed wrong or misleading data into an automated system, is known as data poisoning. Over time, the AI begins to learn the wrong patterns, leading it to take actions based on bad data. This can lead to dangerous outcomes.
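To make the idea concrete, here is a minimal, self-contained sketch of a label-flipping poisoning attack on a toy “bay occupied” classifier. Everything in it is invented for illustration (the one-feature sensor model, the poison rate, the threshold learner); it is not the station system described above, only the pattern of the attack: mislabeled training examples drag the learned decision boundary, and accuracy on clean data degrades.

```python
import random

random.seed(0)

# Toy stand-in for the station cameras: one feature (red-channel intensity),
# label 1 = "bay occupied", 0 = "bay empty".
def make_data(n, flip_rate=0.0):
    data = []
    for _ in range(n):
        occupied = random.random() < 0.5
        # Occupied bays show a bright brake light; empty bays stay dim.
        intensity = random.gauss(0.8 if occupied else 0.2, 0.1)
        label = int(occupied)
        # Poisoning: the attacker's laser causes some empty bays to be
        # recorded with the wrong "occupied" label.
        if not occupied and random.random() < flip_rate:
            label = 1
        data.append((intensity, label))
    return data

def train_threshold(data):
    # "Training": place the decision threshold midway between class means.
    mean = lambda xs: sum(xs) / len(xs)
    lo = [x for x, y in data if y == 0]
    hi = [x for x, y in data if y == 1]
    return (mean(lo) + mean(hi)) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

clean_test = make_data(2000)
t_clean = train_threshold(make_data(2000))
t_poisoned = train_threshold(make_data(2000, flip_rate=0.6))
print(f"threshold (clean training):    {t_clean:.2f}")
print(f"threshold (poisoned training): {t_poisoned:.2f}")
print(f"accuracy, clean-trained:       {accuracy(t_clean, clean_test):.3f}")
print(f"accuracy, poison-trained:      {accuracy(t_poisoned, clean_test):.3f}")
```

The flipped labels pull the “occupied” class mean downward, so the learned threshold drops and the poison-trained model starts flagging empty bays as occupied — the same failure mode as the delayed trains above.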

In the train station example, suppose a sophisticated attacker wants to disrupt public transportation while also gathering intelligence. For 30 days, they use a red laser to trick the cameras. Left undetected, such attacks can slowly corrupt an entire system, opening the way for worse outcomes such as backdoor attacks into secure systems, data leaks and even espionage. While data poisoning in physical infrastructure is rare, it is already a significant concern in online systems, especially those powered by large language models trained on social media and web content.

A famous example of data poisoning in the field of computer science came in 2016, when Microsoft debuted a chatbot known as Tay. Within hours of its public release, malicious users online began feeding the bot reams of inappropriate comments. Tay soon began parroting the same inappropriate terms as users on X (then Twitter), horrifying millions of onlookers. Within 24 hours, Microsoft had disabled the tool, and a public apology followed shortly after.

The social media data poisoning of the Microsoft Tay model underlines the vast distance that lies between artificial and actual human intelligence. It also highlights the degree to which data poisoning can make or break a technology and its intended use.

Data poisoning might not be entirely preventable. But there are commonsense measures that can help guard against it, such as placing limits on data processing volume and vetting data inputs against a strict checklist to keep control of the training process. Mechanisms that detect poisoning attacks before they become too powerful are also critical for reducing their effects.
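One way to picture such vetting is a pre-ingestion checklist. The sketch below is hypothetical (the camera names, field names, and limits are all invented, not from any real system): each incoming sample must pass every rule, and each source is capped at a fixed volume, before anything reaches the training set.

```python
from collections import defaultdict

# Hypothetical pre-ingestion checklist for training samples.
MAX_SAMPLES_PER_SOURCE = 100           # limit on data volume per camera
ALLOWED_SOURCES = {"cam-north", "cam-south"}
INTENSITY_RANGE = (0.0, 1.0)           # physically plausible sensor range

counts = defaultdict(int)

def vet(sample):
    """Return True only if the sample passes every rule on the checklist."""
    src, intensity, label = sample["source"], sample["intensity"], sample["label"]
    if src not in ALLOWED_SOURCES:                 # unknown or spoofed camera
        return False
    if not INTENSITY_RANGE[0] <= intensity <= INTENSITY_RANGE[1]:
        return False                               # impossible sensor reading
    if label not in (0, 1):
        return False                               # malformed label
    if counts[src] >= MAX_SAMPLES_PER_SOURCE:      # volume limit reached
        return False
    counts[src] += 1
    return True

stream = [
    {"source": "cam-north", "intensity": 0.82, "label": 1},
    {"source": "cam-rogue", "intensity": 0.90, "label": 1},  # unknown source
    {"source": "cam-south", "intensity": 7.5,  "label": 1},  # impossible reading
    {"source": "cam-north", "intensity": 0.15, "label": 0},
]
accepted = [s for s in stream if vet(s)]
print(f"accepted {len(accepted)} of {len(stream)} samples")
```

Only the two plausible readings survive; the unknown source and the out-of-range value never reach training.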

Fighting back with the blockchain

At Florida International University’s solid lab, we are working to defend against data poisoning attacks by focusing on decentralized approaches to building technology. One such approach, known as federated learning, allows AI models to learn from decentralized data sources without collecting raw data in one place. Centralized systems have a single point of failure, but decentralized ones cannot be brought down through a single target.

Federated learning offers a valuable layer of protection, because poisoned data from one device doesn’t immediately affect the model as a whole. However, damage can still occur if the process the model uses to aggregate data is compromised.
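A toy illustration of why the aggregation step matters (the numeric “updates” are invented, and real federated learning averages full weight vectors rather than single numbers): plain federated averaging lets one poisoned client drag the global model toward the attacker, while a robust aggregation rule such as the median largely ignores the outlier.

```python
import statistics

# Each client trains locally and submits only a model update (here, a single
# weight) -- raw data never leaves the device.
honest_updates = [0.49, 0.51, 0.50, 0.52, 0.48]
poisoned_update = 5.0     # one compromised client submits an extreme update

updates = honest_updates + [poisoned_update]

naive = sum(updates) / len(updates)     # plain federated averaging
robust = statistics.median(updates)     # a robust aggregation rule

print(f"mean aggregate:   {naive:.2f}")    # dragged toward the attacker
print(f"median aggregate: {robust:.2f}")   # stays near the honest consensus
```

The honest clients cluster around 0.5, yet the mean lands at 1.25 because of a single bad update; the median stays with the honest majority. This is the sense in which a compromised aggregation process can still do damage, and why the choice of aggregation rule is itself a defense.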

This is where another more popular potential solution—blockchain—comes into play. A blockchain is a shared, unalterable digital ledger for recording transactions and tracking assets. Blockchains provide secure and transparent records of how data and updates to AI models are shared and verified.

By using automated consensus mechanisms, AI systems with blockchain-protected training can validate updates more reliably and help identify the kinds of anomalies that sometimes indicate data poisoning before it spreads.

Blockchains also have a time-stamped structure that allows practitioners to trace poisoned inputs back to their origins, making it easier to reverse damage and strengthen future defenses. Blockchains are also interoperable—in other words, they can “talk” to each other. This means that if one network detects a poisoned data pattern, it can send a warning to others.
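A minimal sketch of such a time-stamped, tamper-evident ledger (this shows only the hash chain; a real blockchain adds a consensus protocol on top, and the source names here are invented): each recorded model update is hashed together with its timestamp and the previous block’s hash, so any later rewrite breaks the chain at a specific, traceable block.

```python
import hashlib
import json
import time

# Hash-chained ledger of model updates: each block commits to the previous
# block's hash, so rewriting history invalidates everything downstream.
def block_hash(block):
    payload = json.dumps(
        {k: block[k] for k in ("index", "timestamp", "source", "update", "prev_hash")},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, source, update):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "timestamp": time.time(),
             "source": source, "update": update, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    """Return the index of the first invalid block, or -1 if the chain is intact."""
    for i, b in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if b["prev_hash"] != prev or b["hash"] != block_hash(b):
            return i
    return -1

chain = []
append(chain, "cam-north", 0.50)
append(chain, "cam-south", 0.51)
append(chain, "cam-north", 0.49)
assert verify(chain) == -1           # untampered chain checks out

chain[1]["update"] = 5.0             # attacker rewrites a recorded update
print("first bad block:", verify(chain))
```

Because the tampered block no longer matches its own hash, verification pinpoints block 1, and its timestamp and source field say when and where the poisoned input entered, which is exactly the trace-back property described above.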

At solid lab, we have built a new tool that leverages both federated learning and blockchain as a bulwark against data poisoning. Other solutions are coming from researchers who are using prescreening filters to vet data before it reaches the training process, or simply training their machine learning systems to be extra sensitive to potential cyberattacks.

Ultimately, AI systems that rely on data from the real world will always be vulnerable to manipulation. Whether it’s a red laser pointer or misleading social media content, the threat is real. Using defense tools such as federated learning and blockchain can help researchers and developers build more resilient, accountable AI systems that can detect when they’re being deceived and alert system administrators to intervene.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
How poisoned data can trick AI, and how to stop it (2025, August 13)
retrieved 13 August 2025
from https://techxplore.com/news/2025-08-poisoned-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
