Poster

The Implicit Bias of AdaGrad on Separable Data

Qian Qian · Xiaoyuan Qian

East Exhibition Hall B + C #247

Keywords: Theory, Regularization, Optimization, Convex Optimization


Abstract:

We study the implicit bias of AdaGrad on separable linear classification problems. We show that AdaGrad converges in direction to the solution of a quadratic optimization problem whose feasible set is the same as that of the hard-margin SVM problem. We also discuss how different choices of AdaGrad's hyperparameters may affect this direction. This provides a deeper understanding of why adaptive methods do not seem to generalize as well as gradient descent in practice.
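A sketch of the two optimization problems the abstract contrasts, in our own notation (the abstract does not state the exact form; in particular, the positive-definite diagonal matrix D standing in for AdaGrad's limiting preconditioner is our assumption):

```latex
% Linearly separable data (x_i, y_i), y_i in {-1, +1}.
% Hard-margin SVM: minimize the Euclidean norm over the margin constraints.
% AdaGrad limit direction (per the abstract's characterization): minimize a
% quadratic form over the SAME feasible set; D is a hypothetical diagonal
% positive-definite matrix reflecting AdaGrad's coordinate-wise adaptation.
\begin{align}
  \text{hard-margin SVM:}\quad
    & \min_{w}\ \lVert w \rVert_2^2
      \quad \text{s.t.}\quad y_i \langle w, x_i \rangle \ge 1 \ \ \forall i, \\
  \text{AdaGrad limit:}\quad
    & \min_{w}\ w^{\top} D\, w
      \quad \text{s.t.}\quad y_i \langle w, x_i \rangle \ge 1 \ \ \forall i.
\end{align}
```

Since D generally differs from the identity, the minimizer need not coincide with the max-margin direction, which is consistent with the abstract's point that AdaGrad's hyperparameters can shift the direction it converges to.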
