Poster
On the Importance of Gradient Norm in PAC-Bayesian Bounds
Itai Gat · Yossi Adi · Alex Schwing · Tamir Hazan

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #931

Generalization bounds, which assess the difference between the true risk and the empirical risk, have been studied extensively. However, to obtain such bounds, current techniques rely on strict assumptions such as a uniformly bounded or a Lipschitz loss function. To avoid these assumptions, in this paper we follow an alternative approach: we relax uniform boundedness assumptions by using on-average bounded loss and on-average bounded gradient norm assumptions. Following this relaxation, we propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities. These inequalities introduce an additional loss-gradient norm term into the generalization bound, which is intuitively a surrogate of the model complexity. We apply the proposed bound to Bayesian deep nets and empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
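The gradient-norm term in the bound is an on-average quantity, so it can be estimated empirically. The following is a minimal illustrative sketch (not the authors' implementation), assuming a PyTorch classifier, a data loader, and an iterable of weight samples drawn from an approximate posterior; it averages the squared L2 norm of the per-batch loss gradient over posterior samples and batches, which serves as a surrogate complexity measure in the spirit of the abstract.

    import torch
    import torch.nn.functional as F

    def avg_loss_gradient_norm(model, data_loader, posterior_samples, device="cpu"):
        """Estimate an on-average loss-gradient norm over posterior weight samples.

        posterior_samples: iterable of state_dicts sampled from an (approximate)
        weight posterior. Returns the mean squared L2 norm of the loss gradient,
        averaged over samples and mini-batches. Hypothetical helper for illustration.
        """
        model.to(device)
        sq_norms = []
        for state_dict in posterior_samples:
            model.load_state_dict(state_dict)
            for x, y in data_loader:
                x, y = x.to(device), y.to(device)
                loss = F.cross_entropy(model(x), y)
                # Gradient of the mini-batch loss w.r.t. all trainable parameters.
                params = [p for p in model.parameters() if p.requires_grad]
                grads = torch.autograd.grad(loss, params)
                sq_norms.append(sum(g.pow(2).sum() for g in grads).item())
        return sum(sq_norms) / len(sq_norms)

Averaging over posterior samples (rather than evaluating at a single weight vector) reflects the on-average, PAC-Bayesian flavor of the assumptions described above.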

Author Information

Itai Gat (Technion)
Yossi Adi (Facebook AI Research)
Alex Schwing (University of Illinois at Urbana-Champaign)
Tamir Hazan (Technion)
