Many modern machine learning applications come with complex and nuanced design goals such as minimizing the worst-case error, satisfying a given precision or recall target, or enforcing group-fairness constraints. Popular techniques for optimizing such non-decomposable objectives reduce the problem to a sequence of cost-sensitive learning tasks, each of which is then solved by re-weighting the training loss with example-specific costs. We point out that the standard approach of re-weighting the loss to incorporate label costs can produce unsatisfactory results when used to train over-parameterized models. As a remedy, we propose new cost-sensitive losses that extend the classical idea of logit adjustment to handle more general cost matrices. Our losses are calibrated, and can be further improved with distilled labels from a teacher model. Through experiments on benchmark image datasets, we showcase the effectiveness of our approach in training ResNet models with common robust and constrained optimization objectives.
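To make the contrast concrete, below is a minimal PyTorch sketch of the two strategies the abstract describes: standard loss re-weighting versus folding the costs into the logits in the spirit of logit adjustment. For brevity it assumes a per-class (diagonal) cost vector rather than a full cost matrix, and the helper names (`reweighted_loss`, `logit_adjusted_loss`) and the exact sign and form of the adjustment are illustrative assumptions, not the paper's precise formulation.

```python
# Minimal sketch: cost-sensitive training via loss re-weighting vs. a
# logit-adjustment-style loss. Per-class costs only; illustrative, not the
# paper's exact loss.
import torch
import torch.nn.functional as F

def reweighted_loss(logits, labels, class_costs):
    """Standard recipe: scale each example's cross-entropy loss by the cost
    attached to its true class."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (class_costs[labels] * per_example).mean()

def logit_adjusted_loss(logits, labels, class_costs):
    """Alternative in the spirit of logit adjustment: fold the (log) costs
    into the logits before the softmax cross-entropy, instead of re-weighting
    the per-example losses."""
    adjusted = logits + torch.log(class_costs)  # broadcasts over the batch
    return F.cross_entropy(adjusted, labels)

# Toy usage on random logits for a 3-class problem with hypothetical costs.
logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
costs = torch.tensor([1.0, 2.0, 5.0])
print(reweighted_loss(logits, labels, costs).item())
print(logit_adjusted_loss(logits, labels, costs).item())
```

The intuition, as the abstract notes, is that with over-parameterized models the re-weighted loss can be driven to near zero regardless of the weights, so the costs leave little imprint on the learned classifier; adjusting the logits instead changes where the softmax places its decision boundaries.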
Author Information
Harikrishna Narasimhan (Google Research)
Aditya Menon (Google)
More from the Same Authors
- 2022: Effect of mixup Training on Representation Learning
  Arslan Chaudhry · Aditya Menon · Andreas Veit · Sadeep Jayasumana · Srikumar Ramalingam · Sanjiv Kumar
- 2022 Poster: Post-hoc estimators for learning to defer to an expert
  Harikrishna Narasimhan · Wittawat Jitkrittum · Aditya Menon · Ankit Rawat · Sanjiv Kumar
- 2020 Poster: Approximate Heavily-Constrained Learning with Lagrange Multiplier Models
  Harikrishna Narasimhan · Andrew Cotter · Yichen Zhou · Serena Wang · Wenshuo Guo
- 2020 Poster: Fair Performance Metric Elicitation
  Gaurush Hiranandani · Harikrishna Narasimhan · Sanmi Koyejo
- 2020 Poster: Consistent Plug-in Classifiers for Complex Objectives and Constraints
  Shiv Kumar Tavker · Harish Guruprasad Ramaswamy · Harikrishna Narasimhan
- 2020 Poster: Robust Optimization for Fairness with Noisy Protected Groups
  Serena Wang · Wenshuo Guo · Harikrishna Narasimhan · Andrew Cotter · Maya Gupta · Michael Jordan
- 2020 Poster: Robust large-margin learning in hyperbolic space
  Melanie Weber · Manzil Zaheer · Ankit Singh Rawat · Aditya Menon · Sanjiv Kumar
- 2019 Poster: Optimizing Generalized Rate Metrics with Three Players
  Harikrishna Narasimhan · Andrew Cotter · Maya Gupta
- 2019 Poster: Noise-tolerant fair classification
  Alex Lamy · Ziyuan Zhong · Aditya Menon · Nakul Verma
- 2019 Oral: Optimizing Generalized Rate Metrics with Three Players
  Harikrishna Narasimhan · Andrew Cotter · Maya Gupta
- 2019 Poster: On Making Stochastic Classifiers Deterministic
  Andrew Cotter · Maya Gupta · Harikrishna Narasimhan
- 2019 Poster: Multilabel reductions: what is my loss optimising?
  Aditya Menon · Ankit Singh Rawat · Sashank Reddi · Sanjiv Kumar
- 2019 Spotlight: Multilabel reductions: what is my loss optimising?
  Aditya Menon · Ankit Singh Rawat · Sashank Reddi · Sanjiv Kumar
- 2019 Oral: On Making Stochastic Classifiers Deterministic
  Andrew Cotter · Maya Gupta · Harikrishna Narasimhan