Poster

Path-SGD: Path-Normalized Optimization in Deep Neural Networks

Behnam Neyshabur · Russ Salakhutdinov · Nati Srebro

210 C #33

Abstract:

We revisit the choice of SGD for training deep neural networks by reconsidering the appropriate geometry in which to optimize the weights. We argue for a geometry invariant to rescaling of weights that does not affect the output of the network, and suggest Path-SGD, which is an approximate steepest descent method with respect to a path-wise regularizer related to max-norm regularization. Path-SGD is easy and efficient to implement and leads to empirical gains over SGD and AdaGrad.
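For intuition, the sketch below shows the kind of rescaling-invariant update the abstract describes, for the p=2 path-regularizer. It is a minimal NumPy illustration, not the authors' implementation: it assumes a plain fully-connected ReLU network without biases, and the function names (path_sgd_scales, path_sgd_step) are hypothetical. Per-edge scales are obtained by propagating all-ones vectors through the squared weights, once forward and once backward, and the gradient is divided elementwise by those scales.

```python
import numpy as np

def path_sgd_scales(weights):
    """Per-weight scaling factors kappa for the p=2 path-regularizer.

    `weights` is a list of matrices W_1..W_d, where layer l maps x to
    W_l @ x.  For the edge (j -> i) in layer l,
        kappa[l][i, j] = gamma_in(j) * gamma_out(i),
    where gamma_in sums squared path products from the inputs to unit j
    and gamma_out sums squared path products from unit i to the outputs.
    """
    # Forward pass of an all-ones vector through squared weights: gamma_in.
    gin = [np.ones(weights[0].shape[1])]
    for W in weights:
        gin.append((W ** 2) @ gin[-1])
    # Backward pass of an all-ones vector through squared weights: gamma_out.
    gout = [np.ones(weights[-1].shape[0])]
    for W in reversed(weights):
        gout.insert(0, (W ** 2).T @ gout[0])
    # Outer product gives a kappa matrix shaped like each weight matrix.
    return [np.outer(gout[l + 1], gin[l]) for l in range(len(weights))]

def path_sgd_step(weights, grads, lr=0.01, eps=1e-12):
    """One Path-SGD-style update: elementwise gradient rescaling by 1/kappa.

    `eps` guards against division by zero and is an implementation
    convenience here, not part of the method as stated in the abstract.
    """
    kappas = path_sgd_scales(weights)
    return [W - lr * g / (k + eps) for W, g, k in zip(weights, grads, kappas)]

if __name__ == "__main__":
    # Toy usage with random weights and stand-in gradients.
    rng = np.random.default_rng(0)
    ws = [rng.standard_normal((5, 4)), rng.standard_normal((3, 5))]
    gs = [np.ones_like(w) for w in ws]
    ws = path_sgd_step(ws, gs, lr=0.1)
```

Because the scales are built from products of squared weights along paths, dividing the gradient by them yields an update that is unchanged under the node-wise weight rescalings the abstract refers to, which is the property that plain SGD lacks.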