Perturbing BatchNorm and Only BatchNorm Benefits Sharpness-Aware Minimization
Maximilian Mueller · Matthias Hein
Event URL: https://openreview.net/forum?id=yL_iq-Q-ORS
We investigate the connection between two popular methods commonly used in training deep neural networks: Sharpness-Aware Minimization (SAM) and Batch Normalization. We find that perturbing only the affine BatchNorm parameters in the adversarial step of SAM benefits generalization performance, while excluding them from the perturbation can strongly degrade it. We confirm our results across several models and SAM variants on CIFAR-10 and CIFAR-100 and show preliminary results for ImageNet. Our results provide a practical tweak for training deep networks, but also cast doubt on the commonly accepted explanation that SAM works by minimizing a sharpness quantity responsible for generalization.
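As a rough illustration of the tweak described in the abstract, below is a minimal PyTorch-style sketch (not the authors' implementation) of a SAM update in which the adversarial ascent step is applied only to the affine BatchNorm parameters. The function names, the default rho=0.05, and the generic base optimizer are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch model that contains BatchNorm layers with
# affine=True; this is an illustrative example, not the authors' code.
import torch
import torch.nn as nn


def bn_affine_params(model: nn.Module):
    """Yield only the affine (weight/bias) parameters of BatchNorm layers."""
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            for p in (module.weight, module.bias):
                if p is not None:
                    yield p


def bn_only_sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One SAM update whose adversarial perturbation is restricted to the
    BatchNorm affine parameters (name and default rho are illustrative)."""
    bn_params = list(bn_affine_params(model))

    # First forward/backward pass: gradients at the current weights.
    model.zero_grad(set_to_none=True)
    loss = loss_fn(model(x), y)
    loss.backward()

    # Ascent step w <- w + rho * g / ||g||, applied only to BatchNorm parameters.
    grads = [p.grad for p in bn_params if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    perturbations = []
    with torch.no_grad():
        for p in bn_params:
            e = None
            if p.grad is not None:
                e = rho * p.grad / (grad_norm + 1e-12)
                p.add_(e)
            perturbations.append(e)

    # Second forward/backward pass at the perturbed point: all parameters get
    # gradients here; only the perturbation itself was restricted to BatchNorm.
    model.zero_grad(set_to_none=True)
    loss_fn(model(x), y).backward()

    # Undo the perturbation and take the actual descent step with those gradients.
    with torch.no_grad():
        for p, e in zip(bn_params, perturbations):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad(set_to_none=True)
    return loss.item()
```

A training loop would call this in place of the usual forward/backward/step; the SAM variants studied in the paper would change the form of the ascent step rather than the choice of which parameters to perturb.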
Author Information
Maximilian Mueller (University of Tübingen)
Matthias Hein (University of Tübingen)
More from the Same Authors
- 2022 : Leveraging the Stochastic Predictions of Bayesian Neural Networks for Fluid Simulations
  Maximilian Mueller · Robin Greif · Frank Jenko · Nils Thuerey
- 2022 : Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation
  Maksym Yatsura · Kaspar Sakmann · N. Grace Hua · Matthias Hein · Jan Hendrik Metzen
- 2022 Poster: Diffusion Visual Counterfactual Explanations
  Maximilian Augustin · Valentyn Boreiko · Francesco Croce · Matthias Hein
- 2022 Poster: Provably Adversarially Robust Detection of Out-of-Distribution Data (Almost) for Free
  Alexander Meinke · Julian Bitterwolf · Matthias Hein
- 2020 Poster: Certifiably Adversarially Robust Detection of Out-of-Distribution Data
  Julian Bitterwolf · Alexander Meinke · Matthias Hein
- 2019 : Break / Poster Session 1
  Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Kumar Reddy Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Aradhana Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang
- 2019 Poster: Provably robust boosted decision stumps and trees against adversarial attacks
  Maksym Andriushchenko · Matthias Hein
- 2019 Poster: Generalized Matrix Means for Semi-Supervised Learning with Multilayer Graphs
  Pedro Mercado · Francesco Tudisco · Matthias Hein