
Adaptive Sampling for Minimax Fair Classification
Shubhanshu Shekhar · Greg Fields · Mohammad Ghavamzadeh · Tara Javidi

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Machine learning models trained on uncurated datasets can often end up adversely affecting inputs belonging to underrepresented groups. To address this issue, we consider the problem of adaptively constructing training sets that allow us to learn classifiers that are fair in a minimax sense. We first propose an adaptive sampling algorithm based on the principle of optimism, and derive theoretical bounds on its performance. We also propose heuristic extensions of this algorithm suitable for application to large-scale, practical problems. Next, by deriving algorithm-independent lower bounds for a specific class of problems, we show that the performance achieved by our adaptive scheme cannot be improved in general. We then validate the benefits of adaptively constructing training sets via experiments on synthetic tasks with logistic regression classifiers, as well as on several real-world tasks using convolutional neural networks (CNNs).
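To illustrate the flavor of optimism-based adaptive sampling described in the abstract, here is a minimal, hypothetical sketch (not the authors' actual algorithm): at each round a classifier is trained on the current pool, and a new batch of samples is requested from the group whose upper-confidence-bound (optimistic) error estimate is largest. The synthetic data, logistic-regression fit, and bonus term are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic groups with different class-conditional separations;
# group 1 is "harder" (more overlap between the two classes).
def draw(group, n):
    y = rng.integers(0, 2, n)
    sep = 2.0 if group == 0 else 0.8   # smaller separation -> harder group
    X = rng.normal(loc=sep * (2 * y - 1)[:, None], scale=1.0, size=(n, 2))
    return X, y

def fit_logreg(X, y, iters=200, lr=0.5):
    # Plain gradient descent on the logistic loss (with a bias column).
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def group_error(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ w > 0).astype(int) != y)

# Adaptive loop: train on the pooled data, then add a batch of samples
# from the group with the largest optimistic (UCB) error estimate.
pools = {g: draw(g, 20) for g in (0, 1)}   # small seed set per group
counts = {0: 20, 1: 20}
batch, rounds = 20, 15
for t in range(1, rounds + 1):
    X = np.vstack([pools[g][0] for g in (0, 1)])
    y = np.concatenate([pools[g][1] for g in (0, 1)])
    w = fit_logreg(X, y)
    # Optimistic estimate: empirical group error plus an exploration bonus
    # that shrinks as a group accumulates samples.
    ucb = {g: group_error(w, *pools[g]) + np.sqrt(np.log(t + 1) / counts[g])
           for g in (0, 1)}
    g_star = max(ucb, key=ucb.get)
    Xn, yn = draw(g_star, batch)
    pools[g_star] = (np.vstack([pools[g_star][0], Xn]),
                     np.concatenate([pools[g_star][1], yn]))
    counts[g_star] += batch

print(counts)  # the harder group typically receives more samples
```

Under these assumptions the sampler concentrates its labeling budget on the group with the worse (optimistic) risk, which is the intuition behind targeting the minimax-fair objective.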

Author Information

Shubhanshu Shekhar (UCSD)
Greg Fields (University of California, San Diego)
Mohammad Ghavamzadeh (Google Research)
Tara Javidi (University of California San Diego)
