
Analyzing the Generalization Capability of SGLD Using Properties of Gaussian Channels
Hao Wang · Yizhe Huang · Rui Gao · Flavio Calmon

Thu Dec 09 12:30 AM -- 02:00 AM (PST) @ Virtual

Optimization is a key component of training machine learning models and has a strong impact on their generalization. In this paper, we consider a particular optimization method, the stochastic gradient Langevin dynamics (SGLD) algorithm, and investigate the generalization of models trained by SGLD. We derive a new generalization bound by connecting SGLD with Gaussian channels found in information and communication theory. Our bound can be computed from the training data and incorporates the variance of gradients, which quantifies a particular kind of "sharpness" of the loss landscape. We also consider an algorithm closely related to SGLD, namely differentially private SGD (DP-SGD). We prove that the generalization capability of DP-SGD can be amplified by iteration. Specifically, our bound can be sharpened by a time-decaying factor if the DP-SGD algorithm outputs only the last iterate while keeping all earlier iterates hidden. This decay factor causes the contribution of early iterations to our bound to diminish over time, and it is established via strong data processing inequalities, a fundamental tool in information theory. We demonstrate our bound through numerical experiments, showing that it can predict the behavior of the true generalization gap.
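To make the setting concrete, the SGLD update the abstract refers to adds isotropic Gaussian noise to each stochastic gradient step; it is this injected Gaussian noise that lets each iterate be viewed as the output of a Gaussian channel. Below is a minimal illustrative sketch (not the paper's code); the quadratic toy loss, the function names, and the `temperature` parameterization are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad_fn, step_size, temperature=1.0):
    """One SGLD update: a gradient step plus Gaussian noise.

    The noise has variance 2 * step_size * temperature per coordinate,
    which is the standard Langevin discretization; the additive Gaussian
    term is what connects each iterate to a Gaussian channel.
    """
    grad = grad_fn(theta)
    noise = rng.normal(0.0,
                       np.sqrt(2.0 * step_size * temperature),
                       size=theta.shape)
    return theta - step_size * grad + noise

# Toy example (illustrative): run SGLD on the quadratic loss 0.5 * ||theta||^2,
# whose gradient is simply theta.
theta = np.ones(3)
for _ in range(500):
    theta = sgld_step(theta, lambda t: t, step_size=0.01)
```

Because of the injected noise, the iterates do not converge to the minimizer but fluctuate around it, sampling (approximately) from a Gibbs distribution; lowering `temperature` shrinks these fluctuations toward plain SGD.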

Author Information

Hao Wang (Harvard University)
Yizhe Huang (University of Texas at Austin)
Rui Gao (University of Texas at Austin)
Flavio Calmon (Harvard University)
