The goal of minimizing misclassification error on a training set is often just one of several real-world goals that might be defined on different datasets. For example, one may require a classifier to also make positive predictions at some specified rate for some subpopulation (fairness), or to achieve a specified empirical recall. Other real-world goals include reducing churn with respect to a previously deployed model, or stabilizing online training. In this paper we propose handling multiple goals on multiple datasets by training with dataset constraints, using the ramp penalty to accurately quantify costs, and present an efficient algorithm to approximately optimize the resulting non-convex constrained optimization problem. Experiments on both benchmark and real-world industry datasets demonstrate the effectiveness of our approach.
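As a rough illustration of the idea (a minimal sketch, not the paper's actual algorithm), a dataset constraint such as a minimum positive-prediction rate on a subpopulation can be approximated with a ramp function in place of the discontinuous 0-1 indicator, and enforced as a penalty on the training objective. All function names and the penalty form below are illustrative assumptions:

```python
import numpy as np

def ramp(z):
    # Piecewise-linear ramp: a surrogate for the 0-1 indicator 1{z > 0},
    # clipped to [0, 1]. Unlike the hinge, it is bounded, so it quantifies
    # rates (fractions of examples) rather than unbounded margins.
    return np.clip(z + 0.5, 0.0, 1.0)

def positive_rate(scores):
    # Ramp-based estimate of the classifier's positive-prediction rate
    # on a dataset of decision scores.
    return np.mean(ramp(scores))

def penalized_objective(scores_train, labels_train, scores_group,
                        target_rate, lam):
    # Hinge training loss plus a penalty, weighted by lam, incurred when
    # the ramp-estimated positive rate on the subpopulation (a separate
    # dataset) falls below target_rate -- a coverage-style constraint.
    hinge = np.mean(np.maximum(0.0, 1.0 - labels_train * scores_train))
    violation = np.maximum(0.0, target_rate - positive_rate(scores_group))
    return hinge + lam * violation
```

Because the ramp is non-convex, the resulting constrained problem is non-convex as well, which is why the paper develops an approximate optimization algorithm rather than relying on off-the-shelf convex solvers.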
Author Information
Gabriel Goh (UC Davis)
Andrew Cotter (Google)
Maya Gupta (Google)
Michael P Friedlander (UC Davis)
More from the Same Authors
- 2020 Poster: Approximate Heavily-Constrained Learning with Lagrange Multiplier Models
  Harikrishna Narasimhan · Andrew Cotter · Yichen Zhou · Serena Wang · Wenshuo Guo
- 2020 Poster: Robust Optimization for Fairness with Noisy Protected Groups
  Serena Wang · Wenshuo Guo · Harikrishna Narasimhan · Andrew Cotter · Maya Gupta · Michael Jordan
- 2019 Poster: Optimizing Generalized Rate Metrics with Three Players
  Harikrishna Narasimhan · Andrew Cotter · Maya Gupta
- 2019 Oral: Optimizing Generalized Rate Metrics with Three Players
  Harikrishna Narasimhan · Andrew Cotter · Maya Gupta
- 2019 Poster: On Making Stochastic Classifiers Deterministic
  Andrew Cotter · Maya Gupta · Harikrishna Narasimhan
- 2019 Oral: On Making Stochastic Classifiers Deterministic
  Andrew Cotter · Maya Gupta · Harikrishna Narasimhan
- 2018 Poster: Diminishing Returns Shape Constraints for Interpretability and Regularization
  Maya Gupta · Dara Bahri · Andrew Cotter · Kevin Canini
- 2018 Poster: To Trust Or Not To Trust A Classifier
  Heinrich Jiang · Been Kim · Melody Guan · Maya Gupta
- 2017 Poster: Deep Lattice Networks and Partial Monotonic Functions
  Seungil You · David Ding · Kevin Canini · Jan Pfeifer · Maya Gupta
- 2016 Poster: Launch and Iterate: Reducing Prediction Churn
  Mahdi Milani Fard · Quentin Cormier · Kevin Canini · Maya Gupta
- 2016 Poster: Fast and Flexible Monotonic Functions with Ensembles of Lattices
  Mahdi Milani Fard · Kevin Canini · Andrew Cotter · Jan Pfeifer · Maya Gupta