

Poster

The Implicit Bias of AdaGrad on Separable Data

Qian Qian · Xiaoyuan Qian

Keywords: [ Convex Optimization ] [ Optimization ] [ Regularization ] [ Theory ]

[ Paper ] [ Slides ]
2019 Poster

Abstract:

We study the implicit bias of AdaGrad on separable linear classification problems. We show that AdaGrad converges in direction to the solution of a quadratic optimization problem with the same feasible set as the hard-margin SVM problem. We also discuss how different choices of AdaGrad's hyperparameters can affect this limit direction. This provides a deeper understanding of why adaptive methods do not appear to generalize as well as gradient descent in practice.
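The phenomenon described above can be sketched numerically. The snippet below is a minimal, hypothetical illustration (not code from the paper): it runs diagonal AdaGrad on the logistic loss over a small linearly separable dataset and reports the normalized iterate, which is the limit direction the abstract refers to. The dataset, learning rate, and iteration count are all assumptions chosen only to make the example converge.

```python
import numpy as np

# Toy linearly separable data (hypothetical example, not from the paper).
X = np.array([[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def logistic_grad(w):
    # Gradient of the empirical logistic loss (1/n) * sum_i log(1 + exp(-y_i x_i . w)).
    margins = y * (X @ w)
    coef = -y / (1.0 + np.exp(margins))   # derivative of log(1 + exp(-m)) w.r.t. m is -1/(1+exp(m))
    return (coef[:, None] * X).mean(axis=0)

# Diagonal AdaGrad: per-coordinate step sizes scaled by accumulated squared gradients.
w = np.zeros(2)
G = np.zeros(2)                           # running sum of squared gradients
eta, eps = 0.5, 1e-8                      # assumed hyperparameters
for _ in range(20000):
    g = logistic_grad(w)
    G += g * g
    w -= eta * g / (np.sqrt(G) + eps)

# On separable data the norm of w grows without bound; only the direction converges.
direction = w / np.linalg.norm(w)
print(direction)
```

Since the loss has no finite minimizer on separable data, the iterates diverge in norm while the normalized iterate stabilizes; the paper characterizes that limit direction and how AdaGrad's hyperparameters influence it.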
