Oral
The Marginal Value of Adaptive Gradient Methods in Machine Learning
Ashia C Wilson · Becca Roelofs · Mitchell Stern · Nati Srebro · Benjamin Recht

Wed Dec 06 02:50 PM -- 03:05 PM (PST) @ Hall C

Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple over-parameterized problems, adaptive methods often find drastically different solutions than vanilla stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, SGD achieves zero test error, and AdaGrad and Adam attain test errors arbitrarily close to 1/2. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
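As a rough illustration of the kind of phenomenon described above (this is a generic over-parameterized, linearly separable toy problem, not the paper's specific construction, and the dataset sizes, step sizes, and iteration counts are assumptions chosen only for demonstration), the sketch below trains a linear classifier with plain gradient descent and with Adam on the same data and compares the two solutions:

```python
# Minimal sketch: gradient descent vs. Adam on an over-parameterized,
# linearly separable logistic regression problem. All hyperparameters
# here are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n, d = 20, 200                      # fewer examples than features (over-parameterized)
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)             # linearly separable labels in {-1, +1}

def grad(w):
    # Gradient of the average logistic loss: mean_i log(1 + exp(-y_i * x_i^T w))
    margins = y * (X @ w)
    return -(X.T @ (y / (1.0 + np.exp(margins)))) / n

def run_gd(steps=5000, lr=0.1):
    w = np.zeros(d)
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def run_adam(steps=5000, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    w = np.zeros(d)
    m = np.zeros(d)                 # first-moment estimate
    v = np.zeros(d)                 # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)   # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w_gd, w_adam = run_gd(), run_adam()
cosine = w_gd @ w_adam / (np.linalg.norm(w_gd) * np.linalg.norm(w_adam))
print("train accuracy (GD):  ", np.mean(np.sign(X @ w_gd) == y))
print("train accuracy (Adam):", np.mean(np.sign(X @ w_adam) == y))
print("cosine similarity between the two solutions:", cosine)
```

Both optimizers typically fit the training set perfectly here, yet the two weight vectors point in noticeably different directions, which is the sense in which adaptive methods can reach "drastically different solutions" on over-parameterized problems; how those solutions generalize is the question the paper studies.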

Author Information

Ashia C Wilson (UC Berkeley)
Becca Roelofs (UC Berkeley)
Mitchell Stern (UC Berkeley)
Nati Srebro (TTI-Chicago)
Benjamin Recht (UC Berkeley)
