We address the problem of aggregating an ensemble of predictors with known loss bounds in a semi-supervised binary classification setting, to minimize prediction loss incurred on the unlabeled data. We find the minimax optimal predictions for a very general class of loss functions including all convex and many non-convex losses, extending a recent analysis of the problem for misclassification error. The result is a family of semi-supervised ensemble aggregation algorithms which are as efficient as linear learning by convex optimization, but are minimax optimal without any relaxations. Their decision rules take a form familiar in decision theory -- applying sigmoid functions to a notion of ensemble margin -- without the assumptions typically made in margin-based learning.
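To make the decision-rule form concrete, here is a minimal sketch of it in Python: a weighted ensemble margin passed through a sigmoid. The weight vector `sigma` and the choice of `np.tanh` here are illustrative assumptions, not the paper's construction; in the paper the sigmoid is determined by the loss function, and the weights come from a convex optimization over the predictors' known loss bounds.

```python
import numpy as np

def aggregate_predictions(F, sigma, sigmoid=np.tanh):
    """Sketch of the sigmoid-of-margin decision rule from the abstract.

    F     : (p, n) array; F[j, i] in [-1, 1] is predictor j's
            prediction on unlabeled example i.
    sigma : (p,) weight vector over the p predictors (assumed given;
            in the paper it is found by convex optimization).
    Returns an (n,) array of aggregated predictions in [-1, 1].
    """
    margins = sigma @ F      # ensemble margin of each unlabeled example
    return sigmoid(margins)  # loss-dependent sigmoid in the paper; tanh here

# Toy usage: 3 predictors, 5 unlabeled points.
rng = np.random.default_rng(0)
F = np.sign(rng.standard_normal((3, 5)))
sigma = np.array([0.5, 0.3, 0.2])
print(aggregate_predictions(F, sigma))
```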
Author Information
Akshay Balsubramani (UC San Diego)
Yoav S Freund (University of California, San Diego)
More from the Same Authors
- 2022 Poster: When is the Convergence Time of Langevin Algorithms Dimension Independent? A Composite Optimization Viewpoint
  Yoav S Freund · Yi-An Ma · Tong Zhang
- 2019 Poster: An adaptive nearest neighbor rule for classification
  Akshay Balsubramani · Sanjoy Dasgupta · Yoav Freund · Shay Moran
- 2019 Spotlight: An adaptive nearest neighbor rule for classification
  Akshay Balsubramani · Sanjoy Dasgupta · Yoav Freund · Shay Moran
- 2019 Poster: Faster Boosting with Smaller Memory
  Julaiti Alafate · Yoav S Freund
- 2007 Demonstration: Automatic Cameraman
  Yoav S Freund · Evan Ettinger · Brian McFee · Deborah Goshorn · Shankar Shivappa
- 2007 Poster: Learning the structure of manifolds using random projections
  Yoav S Freund · Sanjoy Dasgupta · Mayank Kabra · Nakul Verma