Modern deep learning models are over-parameterized, and the optimization setup strongly affects their generalization performance. A key element of reliable optimization for such systems is the modification of the loss function. Sharpness-Aware Minimization (SAM) modifies the underlying loss function to guide descent methods towards flatter minima, which arguably have better generalization abilities. In this paper, we focus on a variant of SAM known as mSAM, which, during training, averages the updates generated by adversarial perturbations across several disjoint shards of a mini-batch. Recent work suggests that mSAM can outperform SAM in terms of test accuracy. However, a comprehensive empirical study of mSAM is missing from the literature---previous results have mostly been limited to specific architectures and datasets. To that end, this paper presents a thorough empirical evaluation of mSAM on various tasks and datasets. We provide a flexible implementation of mSAM and compare the generalization performance of mSAM to the performance of SAM and vanilla training on different image classification and natural language processing tasks. We also conduct careful experiments to understand the computational cost of training with mSAM, its sensitivity to hyperparameters, and its correlation with the flatness of the loss landscape. Our analysis reveals that mSAM yields superior generalization performance and flatter minima, compared to SAM, across a wide range of tasks without significantly increasing computational costs.
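To make the mSAM idea in the abstract concrete, here is a minimal NumPy sketch of one mSAM update on a toy least-squares model. It is not the paper's implementation: the loss, the model, the shard count `m`, and the hyperparameter values are illustrative assumptions. The essential structure follows the abstract's description: the mini-batch is split into disjoint shards, each shard computes its own SAM-style adversarial perturbation, and the resulting sharpness-aware gradients are averaged before the descent step.

```python
import numpy as np

def grad_mse(w, X, y):
    # Gradient of 0.5 * mean((Xw - y)^2) for a linear model (toy stand-in
    # for a neural network's mini-batch gradient).
    return X.T @ (X @ w - y) / len(y)

def msam_step(w, X, y, lr=0.1, rho=0.05, m=4):
    """One mSAM update (illustrative sketch).

    The mini-batch (X, y) is split into m disjoint shards. Each shard
    computes its own ascent perturbation eps (as in SAM, scaled to radius
    rho), and the gradients evaluated at the per-shard perturbed weights
    are averaged to form the descent direction.
    """
    shards = np.array_split(np.arange(len(y)), m)
    g_avg = np.zeros_like(w)
    for idx in shards:
        Xs, ys = X[idx], y[idx]
        g = grad_mse(w, Xs, ys)
        eps = rho * g / (np.linalg.norm(g) + 1e-12)  # per-shard ascent step
        g_avg += grad_mse(w + eps, Xs, ys)           # gradient at perturbed point
    return w - lr * g_avg / m

# Usage: a few steps on synthetic data converge towards the true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = msam_step(w, X, y)
```

Setting `m=1` recovers plain SAM on the full mini-batch, which is the main comparison axis the paper studies; the extra cost per step comes from the per-shard forward/backward passes, which the paper's experiments show can be kept modest.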
Author Information
Kayhan Behdin (Massachusetts Institute of Technology)
Qingquan Song (LinkedIn)
Aman Gupta (LinkedIn)
Sathiya Selvaraj (LinkedIn)
David Durfee
Ayan Acharya (University of Texas at Austin)
Rahul Mazumder (MIT)
More from the Same Authors
- 2021 : Adam vs. SGD: Closing the generalization gap on image classification »
  Aman Gupta · Rohan Ramanath · Jun Shi · Sathiya Keerthi
- 2021 : Newer is not always better: Rethinking transferability metrics, their peculiarities, stability and performance »
  Shibal Ibrahim · Natalia Ponomareva · Rahul Mazumder
- 2022 : Network Pruning at Scale: A Discrete Optimization Approach »
  Wenyu Chen · Riade Benbaki · Xiang Meng · Rahul Mazumder
- 2022 : A Light-speed Linear Program Solver for Personalized Recommendation with Diversity Constraints »
  Miao Cheng · Haoyue Wang · Aman Gupta · Rahul Mazumder · Sathiya Selvaraj · Kinjal Basu
- 2022 : Variational Graph Auto-Encoders for Heterogeneous Information Network »
  Abhishek Dalvi · Ayan Acharya · Jing Gao · Vasant Honavar
- 2022 : Poster Session 1 »
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2022 Poster: Pushing the limits of fairness impossibility: Who's the fairest of them all? »
  Brian Hsu · Rahul Mazumder · Preetam Nandy · Kinjal Basu
- 2021 : Poster Session 2 (gather.town) »
  Wenjie Li · Akhilesh Soni · Jinwuk Seok · Jianhao Ma · Jeffery Kline · Mathieu Tuli · Miaolan Xie · Robert Gower · Quanqi Hu · Matteo Cacciola · Yuanlu Bai · Boyue Li · Wenhao Zhan · Shentong Mo · Junhyung Lyle Kim · Sajad Fathi Hafshejani · Chris Junchi Li · Zhishuai Guo · Harshvardhan Harshvardhan · Neha Wadia · Tatjana Chavdarova · Difan Zou · Zixiang Chen · Aman Gupta · Jacques Chen · Betty Shea · Benoit Dherin · Aleksandr Beznosikov
- 2021 Poster: DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning »
  Hussein Hazimeh · Zhe Zhao · Aakanksha Chowdhery · Maheswaran Sathiamoorthy · Yihua Chen · Rahul Mazumder · Lichan Hong · Ed Chi