
One tiny flip can open a dangerous back door in AI

By Simon Osuji
August 13, 2025
in Artificial Intelligence


Self-driving vehicles rely on supposedly foolproof image recognition systems, which could be compromised with this one simple hack. Credit: Image created by ChatGPT.

A self-driving vehicle is cruising along, its many sensors and cameras telling it when to brake, change lanes, and turn. It approaches a stop sign at high speed, but instead of stopping, it barrels through and causes an accident. Investigators will probably never find the problem: instead of reading the stop sign as a stop sign, the car has been hacked to see it as a speed limit sign.


According to research by Qiang Zeng, an associate professor in George Mason University’s Department of Computer Science, Ph.D. student Xiang Li, and their colleagues, it is remarkably simple for a would-be hacker to pull off such a feat.

“An attacker can selectively flip only one bit, and this changing of the bit from 0 to 1 allows an attacker to attach a patch onto any image and fool the AI system. Regardless of the original image input, that patched image will be interpreted as the attacker’s desired result,” said Zeng.

So, if the hacker wants an artificial intelligence (AI) system to see a stop sign as something else, or a cat as a dog, the effort is minimal. Consider a scene potentially pulled from a “Mission: Impossible” movie, where a corporate spy can pass himself off as a CEO, gaining access to sensitive information.

Zeng and colleagues will present a paper with the findings at USENIX Security 2025.

AI systems have what’s called a deep neural network (DNN) as a key component. DNNs let AI handle complex data and perform many different tasks. They work by using numerical values, called weights, each typically stored in 32 bits. There are hundreds of billions of bits in a DNN, so changing only one is particularly stealthy, according to Zeng.
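To see why a single bit carries so much leverage, it helps to look at how a 32-bit weight is stored: one sign bit, eight exponent bits, and 23 mantissa bits. The short Python sketch below is only an illustration of that encoding, not the researchers’ attack; it flips individual bits of a typical small weight and shows that toggling a high exponent bit turns it into an astronomically large value.

import struct

def flip_bit(value: float, bit_index: int) -> float:
    """Flip one bit of a weight's IEEE-754 single-precision encoding."""
    # Reinterpret the float as its raw 32-bit integer pattern.
    (as_int,) = struct.unpack(">I", struct.pack(">f", value))
    # XOR toggles the chosen bit (bit 31 = sign, bits 30-23 = exponent, 22-0 = mantissa).
    (result,) = struct.unpack(">f", struct.pack(">I", as_int ^ (1 << bit_index)))
    return result

weight = 0.1  # a typical small DNN weight
for bit in (0, 22, 30):  # low mantissa bit, top mantissa bit, top exponent bit
    print(f"bit {bit:2d}: {weight} -> {flip_bit(weight, bit)}")

Flipping a low mantissa bit barely moves the weight, while flipping the top exponent bit turns 0.1 into a value on the order of 10^37, the kind of outsized change a one-bit backdoor can exploit.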

“Once the attacker knows the algorithm, then it can take literally only a couple of minutes to make the change. And you won’t realize you’ve been attacked because the AI system will work as usual. Flipping one bit effectively sneaks a back door into AI, exploitable only by those who know the patch,” he said.

Prior work in this area typically added a patch tailored to the original image—for example, modifying a stop sign specifically so that it is misclassified as a 65 mph speed limit sign. This new research uses what’s called a uniform patch that works regardless of the original input; the hacker could cause the system to interpret various signs as a speed limit sign. This input-agnostic attack represents a newer and more dangerous threat.
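Conceptually, the resulting backdoor behaves like the sketch below. The names here (classifier, patch, target_class) are hypothetical placeholders, and the code assumes the flipped bit is already in place inside the model; it only illustrates what input-agnostic means: the same fixed patch, pasted onto any image, steers the output toward the attacker’s chosen class.

import numpy as np

PATCH_SIZE = 32
patch = np.random.rand(PATCH_SIZE, PATCH_SIZE, 3)  # the attacker's fixed trigger pattern

def apply_patch(image: np.ndarray) -> np.ndarray:
    """Paste the same uniform patch onto the corner of any input image."""
    patched = image.copy()
    patched[:PATCH_SIZE, :PATCH_SIZE, :] = patch
    return patched

def backdoor_fires(classifier, image: np.ndarray, target_class: int) -> bool:
    """True if the patched image is read as the attacker's target class."""
    probs = classifier(apply_patch(image))  # classifier: any model returning class scores
    return int(np.argmax(probs)) == target_class

In a backdoored model, backdoor_fires would come back True whether the underlying image shows a stop sign, a cat, or a dog, while unpatched images are still classified normally.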

When they began the project, the researchers wanted to learn the minimum level of effort needed to launch such an attack, recognizing that flipping hundreds of bits is impractical and gets exponentially more difficult.

“It turned out, we only needed to flip one,” Zeng said with a laugh. Appropriately, the team named their attacking system OneFlip.
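As a rough intuition for what such a search involves, the toy scan below (an illustration only, not the OneFlip procedure, with made-up random weights standing in for a trained layer) asks which single exponent-bit flip would change a weight the most.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.05, size=1000).astype(np.float32)  # stand-in for one layer

as_int = weights.view(np.uint32)                          # raw 32-bit patterns of the weights
flipped = (as_int ^ np.uint32(1 << 30)).view(np.float32)  # toggle each weight's top exponent bit

impact = np.abs(flipped.astype(np.float64) - weights)     # how far each flip would move the weight
best = int(np.argmax(impact))
print(f"weight #{best}: {weights[best]:.4f} -> {flipped[best]:.3e}")

A real attack would instead look for a bit whose flip makes the network respond to a specific trigger patch while leaving ordinary inputs untouched.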

For now, the researchers are looking only at images, as image classifiers are among the most popular AI systems, though they suspect the hacking technique could also work on things like speech recognition. Zeng said their success rate during testing was near 100% and stressed that all DNN-based systems are likely vulnerable to such hacking.

This does not necessarily mean such hacking will run rampant. To launch the attack, Zeng said, there are two requirements: access to the exact weights (numerical values that the model learns during training of the AI system) and the ability to execute code on the machine hosting the model. For example, in cloud environments, attackers might exploit shared infrastructure where multiple tenants’ programs run on the same physical hardware.

More information:
OneFlip: oneflipbackdoor.github.io/

Provided by
George Mason University

Citation:
One tiny flip can open a dangerous back door in AI (2025, August 13)
retrieved 13 August 2025
from https://techxplore.com/news/2025-08-tiny-flip-dangerous-door-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




