Adversarial patch attacks are an emerging security threat for real-world deep learning applications. We present Demasked Smoothing, to our knowledge the first approach to certify the robustness of semantic segmentation models against this threat model. Previous work on certified defenses against patch attacks has mostly focused on the image classification task and often required changes to the model architecture as well as additional training, which is undesirable and computationally expensive. In Demasked Smoothing, any segmentation model can be used without particular training, fine-tuning, or architectural restrictions. Using different masking strategies, Demasked Smoothing supports both certified detection and certified recovery. In extensive experiments on the ADE20K dataset, we show that Demasked Smoothing can on average certify 63% of the pixel predictions against a 1% patch in the detection task and 46% against a 0.5% patch in the recovery task.
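The abstract's high-level idea (mask out parts of the image so that any adversarial patch is removed in some views, then aggregate per-pixel predictions) can be sketched as follows. This is an illustrative sketch only, not the paper's exact algorithm or certification procedure; the names `demask_fn` (an image-reconstruction model) and `segment_fn` (any off-the-shelf segmentation model) are assumptions for the example, and the aggregation shown is a plain per-pixel majority vote.

```python
import numpy as np

def demasked_smoothing_sketch(image, segment_fn, demask_fn, masks):
    """Illustrative sketch of the masking-and-aggregation idea.

    For each mask, occlude the image, reconstruct ("demask") it with an
    image-reconstruction model, and run the segmentation model; then
    aggregate the per-pixel class predictions by majority vote. If the
    adversarial patch lies entirely inside a mask, its influence is
    absent from that masked view.
    """
    votes = []
    for mask in masks:                          # mask: bool (H, W), True = occluded
        occluded = image * (~mask[..., None])   # zero out the masked region
        restored = demask_fn(occluded, mask)    # hypothetical inpainting model
        votes.append(segment_fn(restored))      # per-pixel class labels (H, W)
    votes = np.stack(votes)                     # (n_masks, H, W)
    # Per-pixel majority vote over the masked views.
    n_classes = int(votes.max()) + 1
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)                # (H, W) aggregated segmentation
```

Because the base model is only ever *called* on (demasked) inputs, no retraining or architectural change is needed, matching the plug-and-play property claimed in the abstract.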
Author Information
Maksym Yatsura (Bosch Center for Artificial Intelligence)
Kaspar Sakmann (Bosch Center for Artificial Intelligence)
N. Grace Hua (Robert Bosch GmbH, Bosch)
Matthias Hein (University of Tübingen)
Jan Hendrik Metzen (Robert Bosch GmbH)
More from the Same Authors
- 2022 : Perturbing BatchNorm and Only BatchNorm Benefits Sharpness-Aware Minimization
  Maximilian Mueller · Matthias Hein
- 2022 : Denoised Smoothing with Sample Rejection for Robustifying Pretrained Classifiers
  Fatemeh Sheikholeslami · Wan-Yi Lin · Jan Hendrik Metzen · Huan Zhang · J. Zico Kolter
- 2023 Poster: Normalization Layers Are All That Sharpness-Aware Minimization Needs
  Maximilian Mueller · Tiffany Vlaar · David Rolnick · Matthias Hein
- 2023 Poster: Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models
  Naman D Singh · Francesco Croce · Matthias Hein
- 2022 Poster: Diffusion Visual Counterfactual Explanations
  Maximilian Augustin · Valentyn Boreiko · Francesco Croce · Matthias Hein
- 2022 Poster: Provably Adversarially Robust Detection of Out-of-Distribution Data (Almost) for Free
  Alexander Meinke · Julian Bitterwolf · Matthias Hein
- 2021 Poster: Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks
  Maksym Yatsura · Jan Metzen · Matthias Hein
- 2020 Poster: Certifiably Adversarially Robust Detection of Out-of-Distribution Data
  Julian Bitterwolf · Alexander Meinke · Matthias Hein
- 2019 : Break / Poster Session 1
  Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Kumar Reddy Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Aradhana Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang
- 2019 Poster: Provably robust boosted decision stumps and trees against adversarial attacks
  Maksym Andriushchenko · Matthias Hein
- 2019 Poster: Generalized Matrix Means for Semi-Supervised Learning with Multilayer Graphs
  Pedro Mercado · Francesco Tudisco · Matthias Hein
- 2018 Poster: Scaling provable adversarial defenses
  Eric Wong · Frank Schmidt · Jan Hendrik Metzen · J. Zico Kolter