

Poster

Early stopping for kernel boosting algorithms: A general analysis with localized complexities

Yuting Wei · Fanny Yang · Martin Wainwright

Pacific Ballroom #215

Keywords: [ Boosting and Ensemble Methods ] [ Regularization ]


Abstract: Early stopping of iterative algorithms is a widely used form of regularization in statistical learning, commonly applied in conjunction with boosting and related gradient-type algorithms. Although consistency results have been established in some settings, such estimators are less well understood than their analogues based on penalized regularization. In this paper, for a relatively broad class of loss functions and boosting algorithms (including $L^2$-boost, LogitBoost and AdaBoost, among others), we connect the performance of a stopped iterate to the localized Rademacher/Gaussian complexity of the associated function class. This connection allows us to show that local fixed point analysis, now standard in the analysis of penalized estimators, can be used to derive optimal stopping rules. We derive such stopping rules in detail for various kernel classes, and illustrate the correspondence of our theory with practice for Sobolev kernel classes.
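To make the setting concrete, below is a minimal sketch (not the authors' code) of kernel $L^2$-boosting with early stopping: functional gradient descent on the empirical squared loss in a reproducing kernel Hilbert space, stopped at the iteration that minimizes holdout error. The first-order Sobolev kernel $\min(s, t)$, the step size, the iteration budget, and the holdout-based stopping proxy are all illustrative assumptions; the paper derives stopping rules theoretically from localized complexities rather than from a holdout set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative regression setup: noisy samples of a smooth signal on [0, 1].
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

# First-order Sobolev kernel K(s, t) = min(s, t); this kernel choice and
# all parameter values below are illustrative assumptions.
K = np.minimum.outer(x, x)

# Random train/holdout split; the holdout curve is a simple stand-in for
# the paper's theoretically derived stopping rule.
perm = rng.permutation(n)
tr, ho = perm[: n // 2], perm[n // 2:]
K_tr, K_ho = K[np.ix_(tr, tr)], K[np.ix_(ho, tr)]

# Step size chosen to keep the iterates stable (below 2 / lambda_max).
step = 0.5 / np.linalg.eigvalsh(K_tr / len(tr)).max()
alpha = np.zeros(len(tr))  # kernel expansion coefficients
best_err, best_alpha, best_t = np.inf, alpha.copy(), 0

for t in range(1, 501):
    resid = y[tr] - K_tr @ alpha      # L2-boosting residual
    alpha += step * resid / len(tr)   # functional gradient step in the RKHS
    err = np.mean((y[ho] - K_ho @ alpha) ** 2)
    if err < best_err:
        best_err, best_alpha, best_t = err, alpha.copy(), t

print(f"early-stopped at iteration {best_t}, holdout MSE {best_err:.4f}")
```

Run long past the selected iteration, the training residual keeps shrinking while the holdout error turns upward, which is the overfitting behavior that an early stopping rule is designed to cut off.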
