Modern gradient boosting frameworks, such as XGBoost and LightGBM, implement Newton descent in a functional space. At each boosting iteration, their goal is to find the base hypothesis, selected from some base hypothesis class, that is closest to the Newton descent direction in a Euclidean sense. Typically, the base hypothesis class is fixed to be all binary decision trees up to a given depth. In this work, we study a Heterogeneous Newton Boosting Machine (HNBM) in which the base hypothesis class may vary across boosting iterations. Specifically, at each boosting iteration, the base hypothesis class is chosen from a fixed set of subclasses by sampling from a probability distribution. We derive a global linear convergence rate for the HNBM under certain assumptions, and show that it agrees with existing rates for Newton's method when the Newton direction can be perfectly fitted by the base hypothesis at each boosting iteration. We then describe a particular realization of an HNBM, SnapBoost, that at each boosting iteration randomly selects either a decision tree of variable depth or a linear regressor with random Fourier features. We describe how SnapBoost is implemented, with a focus on its training complexity. Finally, we present experimental results on OpenML and Kaggle datasets showing that SnapBoost achieves lower generalization loss than competing boosting frameworks, without taking significantly longer to tune.
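The snippet below is a minimal sketch of the sampling scheme described in the abstract, not the authors' SnapBoost implementation. It assumes squared-error loss, for which the gradient is g = F - y and the Hessian is constant, so the Newton direction -g/h reduces to the ordinary residual. Scikit-learn's DecisionTreeRegressor and an RBFSampler-based random-Fourier-feature ridge regressor stand in for the two subclasses; the names fit_hnbm and predict_hnbm, the sampling probability p_tree, the learning rate, and the depth range are all illustrative assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.kernel_approximation import RBFSampler
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)

    def fit_hnbm(X, y, n_rounds=100, lr=0.1, p_tree=0.8, depth_range=(1, 6)):
        """Boosting loop in which the base hypothesis class is sampled each round."""
        F = np.zeros(len(y))          # current ensemble prediction
        ensemble = []
        for _ in range(n_rounds):
            # Squared error: gradient g = F - y, Hessian h = 1, so the
            # Newton direction -g/h is simply the residual y - F.
            target = y - F
            if rng.random() < p_tree:
                # Subclass 1: a decision tree with randomly sampled depth.
                depth = int(rng.integers(depth_range[0], depth_range[1] + 1))
                base = DecisionTreeRegressor(max_depth=depth, random_state=0)
            else:
                # Subclass 2: a linear (ridge) regressor on random Fourier features.
                base = make_pipeline(
                    RBFSampler(n_components=100, random_state=0),
                    Ridge(alpha=1.0),
                )
            base.fit(X, target)       # least-squares fit to the Newton direction
            F += lr * base.predict(X)
            ensemble.append(base)
        return ensemble

    def predict_hnbm(ensemble, X, lr=0.1):
        return lr * sum(base.predict(X) for base in ensemble)

For a general twice-differentiable loss, the per-example Hessian h would be non-constant and the base learner would be fitted to -g/h with sample weights h; the squared-error case above keeps the sketch short.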
Author Information
Thomas Parnell (IBM Research)
Andreea Anghel (IBM Research)
Małgorzata Łazuka (ETH Zürich)
Nikolas Ioannou (IBM Research)
Sebastian Kurella (ETH Zürich)
Peshal Agarwal (ETH Zürich)
Nikolaos Papandreou (IBM Research Zurich)
Haris Pozidis (IBM Research)
More from the Same Authors
- 2019 Poster: SySCD: A System-Aware Parallel Coordinate Descent Algorithm » Nikolas Ioannou · Celestine Mendler-Dünner · Thomas Parnell
- 2019 Spotlight: SySCD: A System-Aware Parallel Coordinate Descent Algorithm » Nikolas Ioannou · Celestine Mendler-Dünner · Thomas Parnell
- 2018 Poster: Snap ML: A Hierarchical Framework for Machine Learning » Celestine Dünner · Thomas Parnell · Dimitrios Sarigiannis · Nikolas Ioannou · Andreea Anghel · Gummadi Ravi · Madhusudanan Kandasamy · Haralampos Pozidis
- 2017 Poster: Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems » Celestine Dünner · Thomas Parnell · Martin Jaggi