

Poster
in
Workshop: OPT 2022: Optimization for Machine Learning

Rethinking Sharpness-Aware Minimization as Variational Inference

Szilvia Ujváry · Zsigmond Telek · Anna Kerekes · Anna Mészáros · Ferenc Huszar


Abstract:

Sharpness-aware minimisation (SAM) aims to improve the generalisation of gradient-based learning by seeking out flat minima. In this work, we establish connections between SAM and mean-field variational inference (MFVI) of neural network parameters. We show that both these methods have interpretations as optimizing notions of flatness, and when using the reparametrisation trick, they both boil down to calculating the gradient at a perturbed version of the current mean parameter. This thinking motivates our study of algorithms that combine or interpolate between SAM and MFVI. We evaluate the proposed variational algorithms on several benchmark datasets, and compare their performance to variants of SAM. Taking a broader perspective, our work suggests that SAM-like updates can be used as a drop-in replacement for the reparametrisation trick.
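The following is a minimal illustrative sketch (not the authors' code) of the structural similarity the abstract describes: both a SAM-style update and an MFVI update via the reparametrisation trick evaluate the loss gradient at a perturbed copy of the current mean parameter w, differing only in how the perturbation is chosen. The function names, the rho value, and the fixed posterior scale sigma are assumptions made for illustration.

import torch

def perturbed_grad(loss_fn, w, eps):
    # Common core of both updates: gradient of the loss at w + eps.
    w_pert = (w + eps).detach().requires_grad_(True)
    loss_fn(w_pert).backward()
    return w_pert.grad

def sam_perturbation(loss_fn, w, rho=0.05):
    # SAM-style perturbation: step of norm rho along the loss gradient
    # (worst-case direction around the current parameter).
    w_ = w.detach().requires_grad_(True)
    loss_fn(w_).backward()
    g = w_.grad
    return rho * g / (g.norm() + 1e-12)

def mfvi_perturbation(sigma):
    # Reparametrisation-trick perturbation: Gaussian noise scaled by
    # the (here fixed) posterior standard deviation sigma.
    return sigma * torch.randn_like(sigma)

# Toy usage on a quadratic loss: only the perturbation rule differs.
w = torch.zeros(10)
loss_fn = lambda p: ((p - 1.0) ** 2).sum()
g_sam = perturbed_grad(loss_fn, w, sam_perturbation(loss_fn, w))
g_vi = perturbed_grad(loss_fn, w, mfvi_perturbation(torch.full_like(w, 0.1)))

In this view, swapping mfvi_perturbation for sam_perturbation inside a variational update is one way to read the paper's suggestion that SAM-like updates can act as a drop-in replacement for the reparametrisation trick.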
