AI models for remote object detection are vulnerable to both physical and digital attacks, research finds

By Simon Osuji
October 17, 2024


[Figure: Digital vs. physical attacks in remote sensing. In a physical attack, the attacker manipulates the actual targets or the imaging process to induce incorrect predictions; in a digital attack, the attacker directly modifies the pixel values of the image captured by the imaging device. Credit: Jiawei Lian, Northwestern Polytechnical University]

Today, our understanding and assessment of the physical characteristics of the world around us relies less and less on human intelligence and more and more on artificial intelligence.

Remote sensing (RS) technologies have become critical tools for government intelligence, environmental monitoring, autonomous transportation, urban planning and disaster management. But the vast number of images produced by remote cameras must be processed and interpreted, and these tasks are being delegated more and more to deep learning (DL) models.

DL models can process and interpret images much more quickly than humans can, and recent advances in AI have steadily improved this capacity. Despite these efficiency gains, however, no previous study had attempted to assess the overall robustness and potential vulnerabilities of the deep-neural-network (DNN)-based models used for object detection and image classification in RS images.

To address this gap, a team of scientists from Northwestern Polytechnical University and The Hong Kong Polytechnic University reviewed the existing research on the robustness of DL models for object detection and classification in RS images and developed a benchmark to assess the performance of various DL detectors (e.g., YOLO variants, RetinaNet, FreeAnchor). Critically, their analysis revealed several vulnerabilities in DL object-detection algorithms that attackers could exploit.
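
The article doesn't detail the benchmark's mechanics, but the basic shape of such an evaluation, running a pretrained detector over a batch of images and counting confident detections, can be sketched as follows. The snippet uses torchvision's off-the-shelf RetinaNet as a stand-in; the authors' actual models, remote-sensing data and metrics are not reproduced here.

```python
# Minimal sketch of a detector evaluation loop, loosely in the spirit of the
# paper's benchmark. The model, images and threshold are illustrative.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

model = retinanet_resnet50_fpn(weights="DEFAULT")  # pretrained COCO detector
model.eval()

# Hypothetical batch of images: 3-channel float tensors scaled to [0, 1].
images = [torch.rand(3, 512, 512) for _ in range(4)]

with torch.no_grad():
    outputs = model(images)  # one dict per image: boxes, labels, scores

for i, out in enumerate(outputs):
    confident = (out["scores"] > 0.5).sum().item()
    print(f"image {i}: {confident} detections above 0.5 confidence")
```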

The team published their review in the Journal of Remote Sensing.

“We sought to address the lack of comprehensive studies on the robustness of deep learning models used in remote sensing tasks, particularly focusing on image classification and object detection. Our aim was to understand the vulnerabilities of these models to various types of noise, especially adversarial noise, and to systematically evaluate their natural and adversarial robustness,” said Shaohui Mei, professor in the School of Electronic Information at Northwestern Polytechnical University in Xi’an, China and lead author of the review paper.

More specifically, the team investigated how natural noise and various attacks affect model performance. To test the robustness of DL-based object detection and identification, the scientists applied natural noise sources, including salt-and-pepper noise, random noise, and simulated rain, snow and fog, at different intensities.
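
As a rough illustration of one of these corruptions, the sketch below injects salt-and-pepper noise at a chosen intensity; the intensity values are illustrative placeholders, not the paper's settings.

```python
# A minimal sketch of salt-and-pepper corruption for robustness testing.
import torch

def salt_and_pepper(img: torch.Tensor, amount: float = 0.05) -> torch.Tensor:
    """Set a fraction `amount` of pixels to 0 (pepper) or 1 (salt).

    img: float tensor in [0, 1] with shape (C, H, W).
    """
    noisy = img.clone()
    mask = torch.rand(img.shape[1:])       # one draw per pixel location
    noisy[:, mask < amount / 2] = 0.0      # pepper
    noisy[:, mask > 1 - amount / 2] = 1.0  # salt
    return noisy

# Usage: corrupt the same image at increasing intensities, then re-run the
# detector on each version and compare detection quality.
clean = torch.rand(3, 512, 512)
corrupted = [salt_and_pepper(clean, a) for a in (0.01, 0.05, 0.10)]
```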

[Figure: Unlike digital attacks, physical attacks must survive cross-domain transformations, digital-to-physical and physical-to-digital, before they can take effect in the physical world. Credit: Jiawei Lian, Northwestern Polytechnical University]

The team also tested model performance under various digital attacks designed to exploit model vulnerabilities, including the Fast Gradient Sign Method (FGSM), AutoAttack, Projected Gradient Descent (PGD), Carlini & Wagner (C&W) and Momentum Iterative FGSM (MI-FGSM). They also assessed the effects of potential physical attacks, in which a patch is physically painted on or attached to an object, or to its background, to impair the DL model.
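
FGSM, the simplest of these digital attacks, nudges every pixel one step in the direction of the loss gradient's sign. A minimal sketch, assuming a generic differentiable model and loss function (the paper applies such attacks to RS classifiers and detectors; the epsilon budget here is an illustrative placeholder):

```python
import torch

def fgsm(model, x, y, loss_fn, epsilon=8 / 255):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Move every pixel a fixed step in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range
```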

The researchers found many vulnerabilities in DL models that could be exploited by potential adversaries. “Deep learning models, despite their powerful capabilities in remote sensing applications, are susceptible to different kinds of disturbances, including adversarial attacks. It is crucial for developers and users of these technologies to be aware of these vulnerabilities and to work towards improving model robustness to ensure reliable performance in real-world conditions,” said Jiawei Lian, graduate student at the School of Electronic Information at Northwestern Polytechnical University and an author on the paper.

To help other researchers improve DL model robustness in these applications, the authors summarized the results of their analysis across various models, noise types and attacks:

  • Training an adversarial attack shares many similarities with training a neural network and is affected by the same factors, including the training data, the victim models (the deep learning models against which the adversarial examples are generated) and the optimization strategy.
  • Weak detectors such as YOLOv2 can be attacked successfully after the attack learns only limited information, but such attacks generally fail against more robust detectors.
  • Techniques such as “momentum” and “dropout” can boost an attack’s effectiveness (see the MI-FGSM sketch after this list). Further investigation into training strategies and test-time augmentations could improve DNN model security.
  • Physical attacks can be as effective as digital attacks: vulnerabilities in DL models translate into potential real-world exploits, such as attaching a physical patch to compromise a detection algorithm.
  • Researchers can tease out the feature extraction mechanisms of DL models to understand how adversaries could manipulate and disrupt the process.
  • The background of an object can be manipulated to impair a DL model’s ability to correctly detect and identify an object.
  • Adversarial attacks using physical patches in the background of a target may be more practical than attaching patches to targets themselves.
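
To make the momentum point concrete, here is a minimal sketch of Momentum Iterative FGSM (MI-FGSM), which folds each new gradient into a decayed running sum so the attack direction is stabilized across steps. The step count, decay factor and budget are illustrative placeholders, not the paper's settings.

```python
import torch

def mi_fgsm(model, x, y, loss_fn, epsilon=8 / 255, steps=10, mu=1.0):
    """Iterative FGSM with a momentum term (decay factor `mu`)."""
    alpha = epsilon / steps           # per-step perturbation budget
    g = torch.zeros_like(x)           # accumulated (momentum) gradient
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # L1-normalize the new gradient, then fold it into the running term.
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
        # Step, then project back into the epsilon-ball around the clean x.
        step = x_adv.detach() + alpha * g.sign()
        delta = (step - x).clamp(-epsilon, epsilon)
        x_adv = (x + delta).clamp(0, 1).detach()
    return x_adv
```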

The research team acknowledges that their analysis provides only a blueprint for improving RS DL model robustness.

“[Our] next step[s] involve further refining our benchmarking framework and conducting more extensive tests with a wider range of models and noise types. Our ultimate goal is to contribute to the development of more robust and secure DL models for RS, thereby enhancing the reliability and effectiveness of these technologies in critical applications such as environmental monitoring, disaster response, and urban planning,” said Mei.

Xiaofei Wang, Yuru Su and Mingyang Ma from the School of Electronic Information at Northwestern Polytechnical University in Xi’an, China, and Lap-Pui Chau from the Department of Electrical and Electronic Engineering at The Hong Kong Polytechnic University also contributed to this research.

More information:
Shaohui Mei et al, A Comprehensive Study on the Robustness of Deep Learning-Based Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking, Journal of Remote Sensing (2024). DOI: 10.34133/remotesensing.0219

Provided by
Journal of Remote Sensing

Citation: AI models for remote object detection are vulnerable to both physical and digital attacks, research finds (2024, October 17), retrieved 17 October 2024 from https://techxplore.com/news/2024-10-ai-remote-vulnerable-physical-digital.html

