Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs). Despite its pervasiveness, the exact reasons for BatchNorm's effectiveness are still poorly understood. The popular belief is that this effectiveness stems from controlling the change of the layers' input distributions during training to reduce the so-called "internal covariate shift". In this work, we demonstrate that such distributional stability of layer inputs has little to do with the success of BatchNorm. Instead, we uncover a more fundamental impact of BatchNorm on the training process: it makes the optimization landscape significantly smoother. This smoothness induces a more predictive and stable behavior of the gradients, allowing for faster training.
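For readers unfamiliar with the operation the abstract refers to, the per-feature normalization that BatchNorm applies to a layer's inputs can be sketched in NumPy as follows (an illustrative sketch only; the function and variable names are not from the paper):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Batch-normalize a mini-batch of activations.

    x: array of shape (batch, features)
    gamma, beta: learnable per-feature scale and shift
    """
    mu = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                     # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta             # scale/shift restores expressiveness

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=5.0, size=(64, 10))  # shifted, high-variance inputs
y = batchnorm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
print(np.allclose(y.mean(axis=0), 0.0, atol=1e-7))  # → True
print(np.allclose(y.std(axis=0), 1.0, atol=1e-3))   # → True
```

The paper's point is that the benefit of this operation comes not from stabilizing these input statistics per se, but from the smoother loss landscape it induces.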
Author Information
Shibani Santurkar (MIT)
Dimitris Tsipras (MIT)
Andrew Ilyas (MIT)
Aleksander Madry (MIT)
Aleksander Madry is the NBX Associate Professor of Computer Science in the MIT EECS Department and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 2011 and, prior to joining the MIT faculty, he spent time at Microsoft Research New England and on the faculty of EPFL. Aleksander's research interests span algorithms, continuous optimization, the science of deep learning, and understanding machine learning from a robustness perspective. His work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and the 2018 Presburger Award.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: How Does Batch Normalization Help Optimization? »
  Tue. Dec 4th 09:25 -- 09:40 PM, Room 220 E
More from the Same Authors
- 2021 : Estimation of Standard Asymmetric Auction Models »
  Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis
- 2022 : A Unified Framework for Comparing Learning Algorithms »
  Harshay Shah · Sung Min Park · Andrew Ilyas · Aleksander Madry
- 2022 : Invited Talk: Aleksander Mądry »
  Aleksander Madry
- 2022 Poster: 3DB: A Framework for Debugging Computer Vision Models »
  Guillaume Leclerc · Hadi Salman · Andrew Ilyas · Sai Vemprala · Logan Engstrom · Vibhav Vineet · Kai Xiao · Pengchuan Zhang · Shibani Santurkar · Greg Yang · Ashish Kapoor · Aleksander Madry
- 2021 : Spotlight 4: Estimation of Standard Asymmetric Auction Models »
  Yeshwanth Cherapanamjeri · Constantinos Daskalakis · Andrew Ilyas · Emmanouil Zampetakis
- 2021 : Discussion: Aleksander Mądry, Ernest Mwebaze, Suchi Saria »
  Aleksander Madry · Ernest Mwebaze · Suchi Saria
- 2021 : Live panel: Did we solve ImageNet? »
  Shibani Santurkar · Alexander Kolesnikov · Becca Roelofs
- 2021 : ML Model Debugging: A Data Perspective »
  Aleksander Madry
- 2021 : From ImageNet to Image Classification »
  Shibani Santurkar
- 2021 Poster: Unadversarial Examples: Designing Objects for Robust Vision »
  Hadi Salman · Andrew Ilyas · Logan Engstrom · Sai Vemprala · Aleksander Madry · Ashish Kapoor
- 2021 Poster: Editing a classifier by rewriting its prediction rules »
  Shibani Santurkar · Dimitris Tsipras · Mahalaxmi Elango · David Bau · Antonio Torralba · Aleksander Madry
- 2020 : Q&A with Shibani »
  Shibani Santurkar
- 2020 : Invited: Shibani Santurkar »
  Shibani Santurkar
- 2020 : Panel »
  Kilian Weinberger · Maria De-Arteaga · Shibani Santurkar · Jonathan Frankle · Deborah Raji
- 2020 : What Do Our Models Learn? »
  Aleksander Madry
- 2020 Poster: On Adaptive Attacks to Adversarial Example Defenses »
  Florian Tramer · Nicholas Carlini · Wieland Brendel · Aleksander Madry
- 2020 Poster: Do Adversarially Robust ImageNet Models Transfer Better? »
  Hadi Salman · Andrew Ilyas · Logan Engstrom · Ashish Kapoor · Aleksander Madry
- 2020 Oral: Do Adversarially Robust ImageNet Models Transfer Better? »
  Hadi Salman · Andrew Ilyas · Logan Engstrom · Ashish Kapoor · Aleksander Madry
- 2019 : Break / Poster Session 1 »
  Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Kumar Reddy Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Aradhana Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang
- 2019 Workshop: Machine Learning with Guarantees »
  Ben London · Gintare Karolina Dziugaite · Daniel Roy · Thorsten Joachims · Aleksander Madry · John Shawe-Taylor
- 2019 Poster: Image Synthesis with a Single (Robust) Classifier »
  Shibani Santurkar · Andrew Ilyas · Dimitris Tsipras · Logan Engstrom · Brandon Tran · Aleksander Madry
- 2019 Poster: Adversarial Examples Are Not Bugs, They Are Features »
  Andrew Ilyas · Shibani Santurkar · Dimitris Tsipras · Logan Engstrom · Brandon Tran · Aleksander Madry
- 2019 Spotlight: Adversarial Examples Are Not Bugs, They Are Features »
  Andrew Ilyas · Shibani Santurkar · Dimitris Tsipras · Logan Engstrom · Brandon Tran · Aleksander Madry
- 2018 : Adversarial Vision Challenge: Shooting ML Models in the Dark: The Landscape of Blackbox Attacks »
  Aleksander Madry
- 2018 Poster: Spectral Signatures in Backdoor Attacks »
  Brandon Tran · Jerry Li · Aleksander Madry
- 2018 Poster: Adversarially Robust Generalization Requires More Data »
  Ludwig Schmidt · Shibani Santurkar · Dimitris Tsipras · Kunal Talwar · Aleksander Madry
- 2018 Spotlight: Adversarially Robust Generalization Requires More Data »
  Ludwig Schmidt · Shibani Santurkar · Dimitris Tsipras · Kunal Talwar · Aleksander Madry
- 2018 Tutorial: Adversarial Robustness: Theory and Practice »
  J. Zico Kolter · Aleksander Madry
- 2017 : Synthesizing Robust Adversarial Examples »
  Andrew Ilyas · Anish Athalye · Logan Engstrom · Kevin Kwok
- 2017 : Poster Spotlights I »
  Taesik Na · Yang Song · Aman Sinha · Richard Shin · Qiuyuan Huang · Nina Narodytska · Matt Staib · Kexin Pei · Fnu Suya · Amirata Ghorbani · Jacob Buckman · Matthias Hein · Huan Zhang · Yanjun Qi · Yuan Tian · Min Du · Dimitris Tsipras