For many machine learning problems, some inputs are known to be positively (or negatively) related to the output, and in such cases training the model to respect that monotonic relationship can provide regularization and make the model more interpretable. However, flexible monotonic functions are computationally challenging to learn beyond a few features. We break through this barrier by learning ensembles of monotonic calibrated interpolated look-up tables (lattices). A key contribution is an automated algorithm for selecting feature subsets for the ensemble base models. We demonstrate that, compared to random forests, these ensembles produce similar or better accuracy while providing guaranteed monotonicity consistent with prior knowledge, smaller model size, and faster evaluation.
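The core building block named in the abstract, a monotonic calibrated interpolated look-up table, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (the function names and parameterization below are illustrative); it shows the standard trick of guaranteeing monotonicity by construction, by building the table values from a cumulative sum of non-negative increments and evaluating with linear interpolation:

```python
import numpy as np

def monotonic_lut(raw_params):
    """Map unconstrained parameters to non-decreasing look-up table values.

    Each raw parameter is projected to a non-negative increment, so the
    cumulative sum is non-decreasing regardless of the parameter values.
    """
    increments = np.maximum(raw_params, 0.0)
    return np.cumsum(increments)

def evaluate(x, keypoints_x, lut_values):
    """Piecewise-linear interpolation between keypoints (a 1-D lattice)."""
    return np.interp(x, keypoints_x, lut_values)

# Illustrative keypoints and unconstrained parameters.
keypoints = np.array([0.0, 1.0, 2.0, 3.0])
params = np.array([0.5, -0.2, 1.0, 0.3])

values = monotonic_lut(params)            # negative increment is clipped
ys = evaluate(np.array([0.5, 1.5, 2.5]), keypoints, values)

# Monotonicity holds by construction: larger inputs never decrease the output.
assert np.all(np.diff(ys) >= 0)
```

A multi-dimensional lattice generalizes this to multilinear interpolation over a grid of parameters, with monotonicity enforced per feature via constraints on parameter differences; the paper's ensembles combine many such small lattices over selected feature subsets.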
Author Information
Mahdi Milani Fard (Google)
Kevin Canini (Google)
Andrew Cotter (Google)
Jan Pfeifer (Google)
Maya Gupta (Google)
More from the Same Authors
- 2020 Poster: Approximate Heavily-Constrained Learning with Lagrange Multiplier Models
  Harikrishna Narasimhan · Andrew Cotter · Yichen Zhou · Serena Wang · Wenshuo Guo
- 2020 Poster: Robust Optimization for Fairness with Noisy Protected Groups
  Serena Wang · Wenshuo Guo · Harikrishna Narasimhan · Andrew Cotter · Maya Gupta · Michael Jordan
- 2019 Poster: Optimizing Generalized Rate Metrics with Three Players
  Harikrishna Narasimhan · Andrew Cotter · Maya Gupta
- 2019 Oral: Optimizing Generalized Rate Metrics with Three Players
  Harikrishna Narasimhan · Andrew Cotter · Maya Gupta
- 2019 Poster: On Making Stochastic Classifiers Deterministic
  Andrew Cotter · Maya Gupta · Harikrishna Narasimhan
- 2019 Oral: On Making Stochastic Classifiers Deterministic
  Andrew Cotter · Maya Gupta · Harikrishna Narasimhan
- 2018 Poster: Diminishing Returns Shape Constraints for Interpretability and Regularization
  Maya Gupta · Dara Bahri · Andrew Cotter · Kevin Canini
- 2018 Poster: To Trust Or Not To Trust A Classifier
  Heinrich Jiang · Been Kim · Melody Guan · Maya Gupta
- 2017 Poster: Deep Lattice Networks and Partial Monotonic Functions
  Seungil You · David Ding · Kevin Canini · Jan Pfeifer · Maya Gupta
- 2016 Poster: Launch and Iterate: Reducing Prediction Churn
  Mahdi Milani Fard · Quentin Cormier · Kevin Canini · Maya Gupta
- 2016 Poster: Satisfying Real-world Goals with Dataset Constraints
  Gabriel Goh · Andrew Cotter · Maya Gupta · Michael P Friedlander