The growing literature on "benign overfitting" in overparameterized models has been largely restricted to regression or binary classification settings; however, most success stories of modern machine learning have been recorded in multiclass settings. Motivated by this discrepancy, we study benign overfitting in multiclass linear classification. Specifically, we consider the following popular training algorithms on separable data: (i) empirical risk minimization (ERM) with cross-entropy loss, which converges to the multiclass support vector machine (SVM) solution; (ii) ERM with least-squares loss, which converges to the min-norm interpolating (MNI) solution; and (iii) the one-vs-all SVM classifier. Our first key finding is that, under a simple sufficient condition, all three algorithms lead to classifiers that interpolate the training data and have equal accuracy. When the data is generated from a Gaussian mixture or a multinomial logistic model, this condition holds under sufficiently high effective overparameterization. Second, we derive novel error bounds on the accuracy of the MNI classifier, showing that all three training algorithms lead to benign overfitting under sufficient overparameterization. Ultimately, our analysis shows that good generalization is possible for SVM solutions beyond the realm in which typical margin-based bounds apply.
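To make algorithm (ii) concrete, the following is a minimal sketch of the min-norm interpolating (MNI) least-squares classifier on synthetic Gaussian mixture data. The dimensions, class means, and variable names here are illustrative choices, not the paper's experimental setup; the only fact encoded is the standard closed form W = Xᵀ(XXᵀ)⁻¹Y for the minimum-Frobenius-norm solution of XW = Y when the number of features d exceeds the number of samples n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative overparameterized setup: n samples, k classes, d >> n features.
n, d, k = 50, 2000, 3
means = 2.0 * rng.normal(size=(k, d))      # one mean vector per class
y = rng.integers(0, k, size=n)             # class labels in {0, ..., k-1}
X = means[y] + rng.normal(size=(n, d))     # Gaussian mixture samples

# One-hot label matrix Y of shape (n, k).
Y = np.eye(k)[y]

# MNI solution: the minimum-norm W with X W = Y.
# Since d >> n, X has full row rank almost surely, so X X^T is invertible.
W = X.T @ np.linalg.solve(X @ X.T, Y)

# The classifier interpolates the training data exactly ...
print(np.allclose(X @ W, Y))               # True

# ... and so attains zero training error under the argmax decision rule.
preds = np.argmax(X @ W, axis=1)
print((preds == y).mean())                 # 1.0
```

Because d > n here, the linear system has infinitely many interpolating solutions; gradient descent on the least-squares loss from zero initialization converges to this particular minimum-norm one, which is why the abstract identifies ERM with least-squares loss with the MNI classifier.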
Author Information
Ke Wang (University of California, Santa Barbara)
Vidya Muthukumar (Georgia Institute of Technology)
Christos Thrampoulidis (University of British Columbia)
More from the Same Authors
- 2022: On the Implicit Geometry of Cross-Entropy Parameterizations for Label-Imbalanced Data
  Tina Behnia · Ganesh Ramachandra Kini · Vala Vakilian · Christos Thrampoulidis
- 2022: Generalization of Decentralized Gradient Descent with Separable Data
  Hossein Taheri · Christos Thrampoulidis
- 2022: Fast Convergence of Random Reshuffling under Interpolation and the Polyak-Łojasiewicz Condition
  Chen Fan · Christos Thrampoulidis · Mark Schmidt
- 2022: Dropout Disagreement: A Recipe for Group Robustness with Fewer Annotations
  Tyler LaBonte · Abhishek Kumar · Vidya Muthukumar
- 2023 Poster: Faster Margin Maximization Rates for Generic Optimization Methods
  Guanghui Wang · Zihao Hu · Vidya Muthukumar · Jacob Abernethy
- 2023 Poster: Towards Last-Layer Retraining for Group Robustness with Fewer Annotations
  Tyler LaBonte · Vidya Muthukumar · Abhishek Kumar
- 2023 Poster: BiSLS/SPS: Auto-tune Step Sizes for Stable Bi-level Optimization
  Chen Fan · Gaspard Choné-Ducasse · Mark Schmidt · Christos Thrampoulidis
- 2023 Tutorial: Reconsidering Overfitting in the Age of Overparameterized Models
  Spencer Frei · Vidya Muthukumar
- 2022 Poster: Adaptive Oracle-Efficient Online Learning
  Guanghui Wang · Zihao Hu · Vidya Muthukumar · Jacob Abernethy
- 2022 Poster: Imbalance Trouble: Revisiting Neural-Collapse Geometry
  Christos Thrampoulidis · Ganesh Ramachandra Kini · Vala Vakilian · Tina Behnia
- 2022 Poster: Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently
  Haoyuan Sun · Kwangjun Ahn · Christos Thrampoulidis · Navid Azizan
- 2021 Poster: AutoBalance: Optimized Loss Functions for Imbalanced Data
  Mingchen Li · Xuechen Zhang · Christos Thrampoulidis · Jiasi Chen · Samet Oymak
- 2021 Poster: UCB-based Algorithms for Multinomial Logistic Regression Bandits
  Sanae Amani · Christos Thrampoulidis
- 2021 Poster: Label-Imbalanced and Group-Sensitive Classification under Overparameterization
  Ganesh Ramachandra Kini · Orestis Paraskevas · Samet Oymak · Christos Thrampoulidis
- 2020 Poster: Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View
  Christos Thrampoulidis · Samet Oymak · Mahdi Soltanolkotabi
- 2020 Poster: Stage-wise Conservative Linear Bandits
  Ahmadreza Moradipari · Christos Thrampoulidis · Mahnoosh Alizadeh