Recent work has shown that state-of-the-art classifiers are quite brittle: a small adversarial change to an input that was originally classified correctly with high confidence leads to a wrong classification, again with high confidence. This raises concerns that such classifiers are vulnerable to attacks and calls into question their use in safety-critical systems. In this paper we provide, for the first time, formal guarantees on the robustness of a classifier by deriving instance-specific \emph{lower bounds} on the norm of the input manipulation required to change the classifier's decision. Based on this analysis we propose the Cross-Lipschitz regularization functional. We show that using this form of regularization in kernel methods and neural networks improves the robustness of the classifier without any loss in prediction performance.
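The flavor of such an instance-specific guarantee can be illustrated on the simplest case, a linear classifier, where the bound is exact. This is a minimal sketch, not the paper's general formulation: for $f_j(x) = w_j^\top x + b_j$ with predicted class $c$, no $\ell_2$-perturbation $\delta$ with $\|\delta\|_2 < \min_{j \neq c} (f_c(x) - f_j(x)) / \|w_c - w_j\|_2$ can change the decision. The function name and the toy weights below are illustrative choices, not from the paper.

```python
import numpy as np

def robustness_lower_bound(W, b, x):
    """Guaranteed l2 radius around x within which a linear classifier's
    decision cannot change (sketch of an instance-specific lower bound)."""
    scores = W @ x + b
    c = int(np.argmax(scores))  # predicted class
    # For each competing class j, the margin f_c(x) - f_j(x) divided by the
    # Lipschitz constant of f_c - f_j (here simply ||w_c - w_j||_2).
    radii = [
        (scores[c] - scores[j]) / np.linalg.norm(W[c] - W[j])
        for j in range(len(scores)) if j != c
    ]
    return min(radii)

# Toy 3-class linear classifier in 2D (illustrative values).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
b = np.zeros(3)
x = np.array([2.0, 0.5])
r = robustness_lower_bound(W, b, x)
```

For nonlinear models the paper replaces the global constant $\|w_c - w_j\|_2$ with a local bound on $\|\nabla f_c - \nabla f_j\|$ over a ball around $x$; the Cross-Lipschitz regularizer then penalizes exactly these gradient differences during training to enlarge the certified radius.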
Author Information
Matthias Hein (Saarland University)
Maksym Andriushchenko (Saarland University)
More from the Same Authors
- 2021: RobustBench: a standardized adversarial robustness benchmark
  Francesco Croce · Maksym Andriushchenko · Vikash Sehwag · Edoardo Debenedetti · Nicolas Flammarion · Mung Chiang · Prateek Mittal · Matthias Hein
- 2023 Poster: Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings
  Klim Kireev · Maksym Andriushchenko · Carmela Troncoso · Nicolas Flammarion
- 2023 Poster: Sharpness-Aware Minimization Leads to Low-Rank Features
  Maksym Andriushchenko · Dara Bahri · Hossein Mobahi · Nicolas Flammarion
- 2020 Poster: Understanding and Improving Fast Adversarial Training
  Maksym Andriushchenko · Nicolas Flammarion
- 2019: Maksym Andriushchenko, "Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks"
  Maksym Andriushchenko
- 2019: Break / Poster Session 1
  Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Kumar Reddy Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Aradhana Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang
- 2019 Poster: Provably robust boosted decision stumps and trees against adversarial attacks
  Maksym Andriushchenko · Matthias Hein
- 2015 Poster: Efficient Output Kernel Learning for Multiple Tasks
  Pratik Kumar Jawanpuria · Maksim Lapin · Matthias Hein · Bernt Schiele
- 2015 Poster: Top-k Multiclass SVM
  Maksim Lapin · Matthias Hein · Bernt Schiele
- 2015 Spotlight: Top-k Multiclass SVM
  Maksim Lapin · Matthias Hein · Bernt Schiele
- 2015 Poster: Regularization-Free Estimation in Trace Regression with Symmetric Positive Semidefinite Matrices
  Martin Slawski · Ping Li · Matthias Hein