Given the task of predicting Y from X, a loss function L, and a set Gamma of probability distributions on (X, Y), what is the optimal decision rule minimizing the worst-case expected loss over Gamma? In this paper, we address this question by introducing a generalization of the maximum entropy principle. Applying this principle to sets of distributions whose marginal on X is constrained to be the empirical marginal, we provide a minimax interpretation of the maximum likelihood problem over generalized linear models, connecting the minimax problem for each loss function to a generalized linear model. While for some cases, such as the quadratic and logarithmic loss functions, we recover the well-known linear and logistic regression models, our approach reveals novel models for other loss functions. In particular, for the 0-1 loss we derive a classification approach that we call the minimax SVM. The minimax SVM minimizes the worst-case expected 0-1 loss over the proposed Gamma by solving a tractable optimization problem. We perform several numerical experiments, in all of which the minimax SVM outperforms the standard SVM.
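As a hedged illustration of the duality the abstract states for the logarithmic loss (not the paper's code): under log loss, the minimax problem over distributions matching the empirical marginal on X reduces to maximum-likelihood fitting of a logistic regression model. The sketch below, on synthetic data of my own construction, fits such a model by gradient descent on the log-loss objective; all variable names and the data-generating setup are illustrative assumptions.

```python
# Illustrative sketch only: maximum-likelihood logistic regression,
# which the abstract identifies as the minimax-optimal rule under log loss.
import math
import random

random.seed(0)
# Synthetic data: label y = 1 roughly when x > 0, with Gaussian noise.
X = [random.uniform(-2, 2) for _ in range(200)]
Y = [1 if x + random.gauss(0, 0.3) > 0 else 0 for x in X]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent on the average logistic (log) loss.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    gw = gb = 0.0
    for x, y in zip(X, Y):
        p = sigmoid(w * x + b)
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

# The fitted rule predicts y = 1 when w*x + b > 0.
acc = sum((sigmoid(w * x + b) > 0.5) == (y == 1)
          for x, y in zip(X, Y)) / len(X)
```

The point of the sketch is only the correspondence: solving the maximum-likelihood problem for this generalized linear model is, by the paper's minimax interpretation, solving the worst-case expected log-loss problem over the constrained distribution set.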
Author Information
Farzan Farnia (Stanford University)
David Tse (Stanford University)