

Poster

Direct 0-1 Loss Minimization and Margin Maximization with Boosting

Shaodan Zhai · Tian Xia · Ming Tan · Shaojun Wang

Harrah's Special Events Center, 2nd Floor

Abstract:

We propose DirectBoost, a boosting method that builds an ensemble of weak classifiers with a greedy coordinate descent algorithm that directly minimizes the empirical classification error over the labeled training examples. Once the training error reaches a local coordinatewise minimum, DirectBoost switches to a greedy coordinate ascent algorithm that continues adding weak classifiers to maximize any targeted, arbitrarily defined margins until a local coordinatewise maximum of those margins is reached in a certain sense. Experimental results on a collection of machine-learning benchmark datasets show that DirectBoost consistently outperforms AdaBoost, LogitBoost, LPBoost with column generation, and BrownBoost, and is noise tolerant when it maximizes an n-th order bottom sample margin.
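
Below is a minimal sketch of the two-phase procedure the abstract describes, not the authors' implementation: a greedy coordinate descent phase on the empirical 0-1 loss, followed by a greedy coordinate ascent phase on a margin objective. The decision-stump weak learners, the fixed step-size grid, and the use of the normalized minimum sample margin (rather than an n-th order bottom sample margin) are all simplifying assumptions made for illustration.

```python
import numpy as np


def stump_outputs(X):
    """Enumerate threshold stumps on each feature; return an (n_samples, n_stumps) matrix of +/-1 outputs."""
    cols = []
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            s = np.where(X[:, j] <= t, 1.0, -1.0)
            cols.append(s)
            cols.append(-s)
    return np.column_stack(cols)


def directboost_sketch(X, y, n_rounds=50, steps=np.linspace(0.1, 2.0, 20)):
    """Two-phase sketch: coordinate descent on 0-1 error, then coordinate ascent on the minimum margin."""
    H = stump_outputs(X)              # candidate weak-classifier outputs, one column per stump
    alpha = np.zeros(H.shape[1])      # ensemble weights (the coordinates)
    score = np.zeros(len(y))          # current ensemble score f(x_i) = sum_k alpha_k * h_k(x_i)

    def zero_one_error(s):
        return np.mean(y * np.sign(s) <= 0)

    # Phase 1: greedily pick the (weak classifier, step) pair that most reduces training 0-1 error.
    for _ in range(n_rounds):
        best_err, best_k, best_a = zero_one_error(score), None, None
        for k in range(H.shape[1]):
            for a in steps:
                err = zero_one_error(score + a * H[:, k])
                if err < best_err:
                    best_err, best_k, best_a = err, k, a
        if best_k is None:            # coordinatewise local minimum of the 0-1 loss
            break
        alpha[best_k] += best_a
        score += best_a * H[:, best_k]

    # Phase 2: greedily pick the (weak classifier, step) pair that most increases the
    # normalized minimum sample margin. All steps are positive, so the L1 norm of alpha
    # simply grows by the step size a.
    def min_margin(s, l1_norm):
        return np.min(y * s) / max(l1_norm, 1e-12)

    for _ in range(n_rounds):
        l1 = np.sum(alpha)
        best_m, best_k, best_a = min_margin(score, l1), None, None
        for k in range(H.shape[1]):
            for a in steps:
                m = min_margin(score + a * H[:, k], l1 + a)
                if m > best_m:
                    best_m, best_k, best_a = m, k, a
        if best_k is None:            # coordinatewise local maximum of the margin
            break
        alpha[best_k] += best_a
        score += best_a * H[:, best_k]

    return H, alpha
```

Calling directboost_sketch on a small dataset with labels in {-1, +1} returns the stump output matrix and the learned weights, from which predictions are np.sign(H @ alpha). The exhaustive grid search over candidate steps is only meant to make the two-phase control flow explicit; the paper itself targets margin objectives such as the n-th order bottom sample margin rather than the plain minimum margin used here.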
