

Poster

Faster Boosting with Smaller Memory

Julaiti Alafate · Yoav S Freund

East Exhibition Hall B + C #7

Keywords: [ Boosting and Ensemble Methods ] [ Large Scale Learning ] [ Algorithms ]


Abstract:

State-of-the-art implementations of boosting, such as XGBoost and LightGBM, can process large training sets extremely fast. However, this performance requires memory large enough to hold two to three times the training set. This paper presents an alternative approach to implementing boosted trees that achieves a significant speedup over XGBoost and LightGBM, especially when memory is limited. This is achieved using a combination of three techniques: early stopping, effective sample size, and stratified sampling. Our experiments demonstrate a 10-100x speedup over XGBoost when the training data is too large to fit in memory.
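The abstract lists effective sample size among the three techniques. As a rough, hedged illustration only (the paper's exact criterion, thresholds, and sampler are not given here), the standard effective-sample-size statistic for a weighted sample can be computed as below; the resampling trigger and threshold are assumptions for illustration, not the authors' method.

import numpy as np

def effective_sample_size(weights):
    # Standard ESS statistic: (sum w_i)^2 / (sum w_i^2).
    # Equals n for uniform weights; shrinks toward 1 as the
    # boosting weights concentrate on a few hard examples.
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / np.square(w).sum()

# Uniform weights: all 1000 examples contribute fully.
print(effective_sample_size(np.ones(1000)))  # 1000.0

# Skewed weights after several boosting rounds: when the ESS
# falls below a chosen fraction of n (an assumed trigger), a
# fresh stratified sample would be drawn from disk.
w = np.random.exponential(size=1000)
ess = effective_sample_size(w)
if ess < 0.5 * len(w):  # 0.5 is a hypothetical threshold
    print(f"ESS = {ess:.1f}: resample the in-memory working set")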
