Consider the binary classification problem of predicting a target variable Y from a discrete feature vector X = (X1, ..., Xd). When the probability distribution P(X, Y) is known, the optimal classifier, achieving the minimum misclassification rate, is given by the Maximum A-posteriori Probability (MAP) decision rule. In practice, however, estimating the complete joint distribution P(X, Y) is computationally and statistically infeasible for large values of d. An alternative approach is therefore to first estimate some low-order marginals of the joint distribution P(X, Y) and then design the classifier based on these estimated marginals. This approach is also helpful when complete training data instances are not available due to privacy concerns. In this work, we consider the problem of designing the optimal classifier based on some estimated low-order marginals of (X, Y). We prove that, for a given set of marginals, the minimum Hirschfeld-Gebelein-Rényi (HGR) correlation principle introduced in [1] leads to a randomized classification rule whose misclassification rate is no larger than twice that of the optimal classifier. We then show that, under a separability condition, the proposed algorithm is equivalent to a randomized linear regression approach, which naturally yields a robust feature selection method that selects the subset of features having the maximum worst-case HGR correlation with the target variable. Our theoretical upper bound is similar to that of the recent Discrete Chebyshev Classifier (DCC) approach [2], while the proposed algorithm has significant computational advantages since it only requires solving a least-squares optimization problem. Finally, we numerically compare our algorithm with the DCC classifier and show that it achieves lower misclassification rates on various datasets from the UCI data repository.
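The computational core described in the abstract can be illustrated with a short sketch. Under the separability condition, the classifier reduces to a linear regression whose least-squares solution depends only on the pairwise marginals E[XX^T] and E[XY], followed by a randomized decision. The Python/NumPy sketch below is a minimal illustration under these assumptions; the function names, the ridge stabilizer, and the particular randomized rounding rule are illustrative choices, not the paper's exact construction.

```python
# A minimal sketch (not the authors' exact procedure) of the key computational
# idea: with Y in {-1, +1} and discrete features encoded numerically, the
# least-squares coefficients depend only on the pairwise marginals E[X X^T]
# and E[X Y], so the classifier can be built from estimated low-order
# marginals alone. The randomized rounding below is an illustrative choice.
import numpy as np

def fit_from_pairwise_marginals(second_moment, cross_moment, ridge=1e-6):
    """Solve the least-squares problem w = E[X X^T]^{-1} E[X Y].

    second_moment: (d, d) estimate of E[X X^T] (pairwise marginals of X).
    cross_moment:  (d,)   estimate of E[X Y]   (marginal of each X_i with Y).
    A small ridge term keeps the solve stable when the marginals are noisy.
    """
    d = second_moment.shape[0]
    return np.linalg.solve(second_moment + ridge * np.eye(d), cross_moment)

def randomized_predict(x, w, rng):
    """Randomized classification rule: output +1 with probability given by
    the linear score x^T w mapped (and clipped) into [0, 1]."""
    p_plus = np.clip((x @ w + 1.0) / 2.0, 0.0, 1.0)
    return 1 if rng.random() < p_plus else -1

# Usage: estimate the pairwise moments from data (or from privately
# aggregated pairwise counts), fit once, then classify new points.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 5)).astype(float)   # toy binary features
y = np.sign(X[:, 0] + X[:, 1] - 1.0 + 0.1 * rng.standard_normal(1000))
y[y == 0] = 1
w = fit_from_pairwise_marginals(X.T @ X / len(X), X.T @ y / len(X))
preds = np.array([randomized_predict(x, w, rng) for x in X])
print("training error:", np.mean(preds != y))
```

Note that only the second-moment matrix and the cross moments enter the solve, so these statistics could be supplied as aggregated pairwise counts rather than raw training instances, consistent with the privacy motivation mentioned in the abstract.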
Author Information
Meisam Razaviyayn (Stanford University)
Farzan Farnia (Stanford University)
David Tse (Stanford University)
More from the Same Authors
- 2022 Poster: Beyond the Best: Distribution Functional Estimation in Infinite-Armed Bandits
  Yifei Wang · Tavor Baharav · Yanjun Han · Jiantao Jiao · David Tse
- 2019 Poster: Ultra Fast Medoid Identification via Correlated Sequential Halving
  Tavor Baharav · David Tse
- 2018 Poster: Porcupine Neural Networks: Approximating Neural Network Landscapes
  Soheil Feizi · Hamid Javadi · Jesse Zhang · David Tse
- 2018 Poster: A Convex Duality Framework for GANs
  Farzan Farnia · David Tse
- 2017 Poster: Tensor Biclustering
  Soheil Feizi · Hamid Javadi · David Tse
- 2017 Poster: NeuralFDR: Learning Discovery Thresholds from Hypothesis Features
  Fei Xia · Martin J Zhang · James Zou · David Tse
- 2016 Poster: A Minimax Approach to Supervised Learning
  Farzan Farnia · David Tse