
AI Security Flaw Found in Object Detection Systems

Researchers discover a new attack that tricks AI into missing objects by subtly altering just a few pixels, raising concerns for autonomous vehicles and surveillance.

AI Research
November 14, 2025

Artificial intelligence systems that identify objects in images, such as those used in self-driving cars and security cameras, harbor a hidden vulnerability. A recent study shows that attackers can make these systems fail with minimal changes to an image, calling into question the reliability of AI in critical applications and underscoring the need for stronger defenses in technologies that millions depend on daily.

The key discovery is that AI object detectors can be deceived by adding small, grid-shaped patches to images. These patches alter only a tiny fraction of pixels, often less than 2% of the total, yet cause the AI to miss objects like people or cars. In tests, the attack suppressed object detection in up to 100% of cases for some models, meaning the AI failed to identify objects that were plainly visible.
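To make that pixel budget concrete, here is a minimal sketch of how a grid-shaped mask might be laid out inside an object's bounding box. The image size, box coordinates, and grid spacing are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Illustrative grid-shaped patch mask inside one object's bounding box.
# All numbers below are assumptions for the example, not the paper's.
H, W = 416, 416                      # a common detector input resolution
mask = np.zeros((H, W), dtype=bool)

x0, y0, x1, y1 = 120, 80, 260, 300   # hypothetical bounding box (pixels)
stride = 20                          # spacing between grid lines

# Lay down horizontal and vertical lines to form the grid pattern.
mask[y0:y1:stride, x0:x1] = True
mask[y0:y1, x0:x1:stride] = True

ratio = mask.sum() / mask.size
print(f"perturbed pixel ratio: {ratio:.2%}")  # under 2% of the image here
```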

Researchers developed the method, called DPAttack, by concentrating perturbations on the regions of an image where objects are located. A gradient-based procedure iteratively updates the patches so that they disrupt the detector's internal feature maps while remaining inconspicuous to human observers. The patches are kept small and connected, with no more than 10 pixels in a group, making them hard to spot. The technique was applied to popular detectors such as YOLOv4 and Faster R-CNN, which are widely used in real-world systems.
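The paper's exact optimization code is not reproduced here, but a generic signed-gradient loop in PyTorch conveys the idea: gradients flow back from the detector's objectness scores, and only the pixels under the patch mask are updated. The `score_fn` callable, step size, and iteration count are placeholders for whatever the real detector and tuning would provide, so treat this as a sketch of the approach rather than the authors' implementation:

```python
import torch

def patch_attack(image, mask, score_fn, steps=200, step_size=8 / 255):
    """Drive a detector's objectness scores down by editing masked pixels.

    image:    float tensor in [0, 1], e.g. shape (3, H, W)
    mask:     float tensor of 0s and 1s, broadcastable to image's shape
    score_fn: maps an image tensor to per-candidate objectness scores;
              stands in for a forward pass through YOLOv4 or Faster R-CNN
    """
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = score_fn(adv).sum()   # total objectness we want to suppress
        loss.backward()
        with torch.no_grad():
            # Signed-gradient descent step, applied inside the mask only.
            adv -= step_size * adv.grad.sign() * mask
            adv.clamp_(0, 1)         # keep the result a valid image
        adv.grad = None
    return adv.detach()
```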

Experimental results show that DPAttack achieved high success rates. Against YOLOv4, the attack suppressed object detection in 96.2% to 100% of cases, depending on the patch shape, while changing only about 1-2% of pixels on average. Against Faster R-CNN, success rates ranged from 15.3% to 86.5%, with smaller pixel budgets tending to coincide with better results. The study quantified effectiveness with two metrics, success rate (SR) and average perturbing pixel ratio (APP), demonstrating that even minor alterations can have major impacts. Visualizations in the paper confirm that placing patches inside object bounding boxes is most effective, since it directly interferes with the detector's processing.
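As a rough illustration of those two metrics, the helpers below compute SR and APP from per-image attack records. The record fields and numbers are invented for the example and do not come from the paper:

```python
def success_rate(results):
    """SR: fraction of images where every detection was suppressed."""
    return sum(r["all_suppressed"] for r in results) / len(results)

def avg_perturb_ratio(results):
    """APP: mean fraction of pixels the patch actually modified."""
    return sum(r["changed_pixels"] / r["total_pixels"] for r in results) / len(results)

# Hypothetical per-image records for demonstration only.
results = [
    {"all_suppressed": True,  "changed_pixels": 2100, "total_pixels": 416 * 416},
    {"all_suppressed": False, "changed_pixels": 1800, "total_pixels": 416 * 416},
]
print(f"SR = {success_rate(results):.1%}, APP = {avg_perturb_ratio(results):.2%}")
```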

This vulnerability matters because object detection AI is integral to technologies like autonomous vehicles, where missing a pedestrian could cause an accident, and surveillance systems, where a failure might allow a security breach. Because the attack needs only a few altered pixels, it is a practical threat that could be deployed without easy detection. It underscores that current AI systems, despite their advanced capabilities, are not foolproof and require ongoing hardening against malicious inputs.

The research has limitations. The attack was tested primarily on specific datasets, the Tianchi Alibaba-Tsinghua Adversarial Challenge and MS COCO 2017, so its effectiveness in other contexts remains uncertain. The paper does not explore defenses against such attacks, leaving it unclear how best to protect deployed systems. The study also focuses on still-image object detection, so similar vulnerabilities in video or other media were not addressed.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn