

Poster

Beware of Road Markings: A New Adversarial Patch Attack to Monocular Depth Estimation

Hangcheng Liu · Zhenhu Wu · Hao Wang · Xingshuo Han · Shangwei Guo · Tao Xiang · Tianwei Zhang

East Exhibit Hall A-C #1507
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Monocular Depth Estimation (MDE) predicts scene depth from a single RGB image and has been widely integrated into production-grade autonomous driving systems, e.g., Tesla Autopilot. Current adversarial attacks on MDE models focus on attaching an optimized adversarial patch to a designated obstacle. Although effective, this approach has two inherent limitations: its reliance on specific obstacles and its limited malicious impact. In contrast, we propose a pioneering attack on MDE models that decouples obstacles from patches physically and deploys optimized patches on roads, thereby extending the attack scope to arbitrary traffic participants. This approach is inspired by our discovery that various MDE models with different architectures, trained for autonomous driving, rely heavily on road regions when predicting depths for different obstacles. Based on this discovery, we design the Adversarial Road Marking (AdvRM) attack, which camouflages patches as ordinary road markings and deploys them on roads, thereby posing a continuous threat within the environment. Experimental results from both dataset simulations and real-world scenarios demonstrate that AdvRM is effective, stealthy, and robust against various MDE models, achieving a Mean Relative Shift Ratio (MRSR) of about 1.507 across 8 MDE models. The code is available at https://github.com/a-c-a-c/AdvRM.git.
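To illustrate the general idea of optimizing a road-placed patch so that it shifts an MDE model's depth predictions for other scene regions, the following is a minimal, hypothetical sketch in PyTorch. It is not the AdvRM implementation (see the linked repository): the toy depth network, the road mask, the compositing function, and all hyperparameters are placeholder assumptions used only to show the optimization loop and a relative-shift objective in the spirit of MRSR.

```python
# Hypothetical sketch: optimize a road-region adversarial patch against a
# stand-in monocular depth estimation (MDE) model. NOT the AdvRM code;
# the model, mask, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder for a pretrained MDE network (a real attack would load an
# actual depth model); maps an RGB image to a per-pixel depth map.
mde_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),
).eval()
for p in mde_model.parameters():
    p.requires_grad_(False)

def paste_patch(image, patch, mask):
    """Composite the patch onto the road region selected by `mask`."""
    return image * (1 - mask) + patch * mask

# One driving-scene image (batch of 1), a binary mask marking the road area
# where the "road marking" patch is placed, and the trainable patch itself.
image = torch.rand(1, 3, 96, 128)
road_mask = torch.zeros(1, 1, 96, 128)
road_mask[:, :, 64:90, 30:100] = 1.0          # hypothetical road region
patch = torch.rand(1, 3, 96, 128, requires_grad=True)

with torch.no_grad():
    clean_depth = mde_model(image)

optimizer = torch.optim.Adam([patch], lr=0.01)
for step in range(200):
    adv_image = paste_patch(image, patch.clamp(0, 1), road_mask)
    adv_depth = mde_model(adv_image)
    # Maximize the relative depth shift outside the patch region
    # (i.e., on other traffic participants), echoing a mean relative
    # shift ratio style of measurement.
    rel_shift = (adv_depth - clean_depth).abs() / (clean_depth + 1e-6)
    loss = -(rel_shift * (1 - road_mask)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final mean relative shift: {-loss.item():.3f}")
```

A real attack would additionally constrain the patch to resemble ordinary road markings (for stealth) and apply physical-world transformations during optimization; this sketch omits those steps for brevity.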
