A Unified DRO View of Multi-class Loss Functions with top-N Consistency
Dixian Zhu · Tianbao Yang
Event URL: https://openreview.net/forum?id=F_uhznd5dH_
Multi-class classification is one of the most common tasks in machine learning applications, where data is labeled by one of many class labels. Many loss functions have been proposed for multi-class classification, including two well-known ones, namely the cross-entropy (CE) loss and the Crammer-Singer (CS) loss (a.k.a. the SVM loss). While the CS loss has been widely used for traditional machine learning tasks on structured data, the CE loss is usually a better choice (and the default choice) for multi-class deep learning tasks. There are also top-$k$ variants of the CS and CE losses that were proposed to promote the learning of a classifier with better top-$k$ accuracy. Nevertheless, the relationship between these different losses remains unclear, which hinders our understanding of what to expect from them in different scenarios. Moreover, in many real-world applications (e.g., natural image classification), data is often inherently multi-label, which renders the given label information incomplete and noisy; overfitting to the given annotations with high-capacity deep neural networks can therefore harm generalization performance. In this paper, we present a unified view of the CS/CE losses and their smoothed top-$k$ variants by proposing a new family of loss functions, which are arguably better than the CS/CE losses when the given label information is incomplete and noisy. The new family of smooth loss functions, named the label-distributionally robust (LDR) loss, is defined by leveraging the distributionally robust optimization (DRO) framework to model the uncertainty in the given label information, where the uncertainty over true class labels is captured by distributional weight (DW) variables for each label, regularized by a function. We have two observations: (i) the CS and CE losses are special cases of the LDR loss obtained by choosing two particular values of the involved regularization parameter; hence the LDR loss interpolates between the CS loss and the CE loss, induces new variants, and, by varying the regularization strength on the DW variables, can avoid the defects of the CS loss while enjoying more flexibility than the CE loss; (ii) the smoothed top-$k$ losses are also special cases of the LDR loss, obtained by restricting the involved uncertainty variables to a bounded ball. Furthermore, we propose a variant of LDR that specializes in top-$k$ classification, named LDR-$k$, for which we develop a novel efficient analytical solution. Theoretically, we establish the top-$N$ consistency (for any $N\geq 1$) of the proposed LDR loss: we prove that both the LDR and LDR-$k$ loss families are calibrated and hence Fisher consistent for a broad family of DW regularization functions, which not only agrees with existing consistency results for the CS and CE losses but also addresses some open problems regarding the consistency of top-$k$ SVM losses. Empirically, we provide experimental results on synthetic data and real-world benchmark data to validate the effectiveness of the new variants of the LDR loss.
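The DRO view described in the abstract has a simple closed form when the distributional weights are regularized by Shannon entropy. The snippet below is a minimal numerical sketch of that view, not the authors' implementation: it assumes an entropic regularizer and unit margins (the paper's exact regularizer family, margin conventions, and LDR-$k$ solution may differ), and illustrates how a single regularization parameter lam recovers a CS-style max-margin loss (lam -> 0), the CE loss (lam = 1 with zero margins), and interpolating variants, while confining the weights to a capped simplex yields a top-$k$ SVM-style loss.

```python
# A minimal sketch of the label-distributionally robust (LDR) view, assuming an
# entropic regularizer on the distributional weights; the paper's exact
# formulation may differ.
import numpy as np

def ldr_loss(scores, y, lam=1.0, margins=None):
    """Entropy-regularized DRO loss over distributional weights p:
        max_{p in simplex}  sum_k p_k * (s_k - s_y + c_k) + lam * H(p),
    which has the closed form  lam * logsumexp((s - s_y + c) / lam).
    lam -> 0 recovers a Crammer-Singer (max-margin) loss; lam = 1 with
    zero margins recovers the cross-entropy loss."""
    s = np.asarray(scores, dtype=float)
    c = (np.ones_like(s) if margins is None
         else np.asarray(margins, dtype=float)).copy()
    c[y] = 0.0                        # no margin required against the true class
    v = s - s[y] + c                  # per-class margin violations
    if lam <= 0:                      # unregularized DRO: hard max (CS loss)
        return v.max()
    m = v.max()                       # numerically stable log-sum-exp
    return m + lam * np.log(np.exp((v - m) / lam).sum())

def ldr_topk_loss(scores, y, k=2, margins=None):
    """Unregularized LDR with weights restricted to the capped simplex
    {p : sum p_i = 1, 0 <= p_i <= 1/k}: the maximizer spreads mass 1/k over
    the k largest violations, giving their average (a top-k SVM-style loss)."""
    s = np.asarray(scores, dtype=float)
    c = (np.ones_like(s) if margins is None
         else np.asarray(margins, dtype=float)).copy()
    c[y] = 0.0
    v = s - s[y] + c
    return np.sort(v)[-k:].mean()

scores, y = np.array([1.2, 0.3, 2.0, -0.5]), 0
print(ldr_loss(scores, y, lam=0.0))                        # CS special case
print(ldr_loss(scores, y, lam=1.0, margins=np.zeros(4)))   # equals the CE loss
print(ldr_loss(scores, y, lam=0.3))                        # an interpolating LDR variant
print(ldr_topk_loss(scores, y, k=2))                       # a top-2 variant
```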
Author Information
Dixian Zhu (University of Iowa)
Tianbao Yang (The University of Iowa)
More from the Same Authors
- 2021 : Practice-Consistent Analysis of Adam-Style Methods
  Zhishuai Guo · Yi Xu · Wotao Yin · Rong Jin · Tianbao Yang
- 2021 : A Stochastic Momentum Method for Min-max Bilevel Optimization
  Quanqi Hu · Bokun Wang · Tianbao Yang
- 2021 : Deep AUC Maximization for Medical Image Classification: Challenges and Opportunities
  Tianbao Yang
- 2023 Poster: Maximization of Average Precision for Deep Learning with Adversarial Ranking Robustness
  Gang Li · Wei Tong · Tianbao Yang
- 2023 Poster: Federated Compositional Deep AUC Maximization
  Xinwen Zhang · Yihan Zhang · Tianbao Yang · Richard Souvenir · Hongchang Gao
- 2023 Poster: Non-Smooth Weakly-Convex Finite-sum Coupled Compositional Optimization
  Quanqi Hu · Dixian Zhu · Tianbao Yang
- 2023 Poster: SpatialRank: Urban Event Ranking with NDCG Optimization on Spatiotemporal Data
  BANG AN · Xun Zhou · YONGJIAN ZHONG · Tianbao Yang
- 2023 Poster: Stochastic Approximation Approaches to Group Distributionally Robust Optimization
  Lijun Zhang · Peng Zhao · Tianbao Yang · Zhi-Hua Zhou
- 2022 Spotlight: Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
  Wei Jiang · Gang Li · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Spotlight: Lightning Talks 6B-1
  Yushun Zhang · Duc Nguyen · Jiancong Xiao · Wei Jiang · Yaohua Wang · Yilun Xu · Zhen LI · Anderson Ye Zhang · Ziming Liu · Fangyi Zhang · Gilles Stoltz · Congliang Chen · Gang Li · Yanbo Fan · Ruoyu Sun · Naichen Shi · Yibo Wang · Ming Lin · Max Tegmark · Lijun Zhang · Jue Wang · Ruoyu Sun · Tommi Jaakkola · Senzhang Wang · Zhi-Quan Luo · Xiuyu Sun · Zhi-Quan Luo · Tianbao Yang · Rong Jin
- 2022 Spotlight: Lightning Talks 4A-2
  Barakeel Fanseu Kamhoua · Hualin Zhang · Taiki Miyagawa · Tomoya Murata · Xin Lyu · Yan Dai · Elena Grigorescu · Zhipeng Tu · Lijun Zhang · Taiji Suzuki · Wei Jiang · Haipeng Luo · Lin Zhang · Xi Wang · Young-San Lin · Huan Xiong · Liyu Chen · Bin Gu · Jinfeng Yi · Yongqiang Chen · Sandeep Silwal · Yiguang Hong · Maoyuan Song · Lei Wang · Tianbao Yang · Han Yang · MA Kaili · Samson Zhou · Deming Yuan · Bo Han · Guodong Shi · Bo Li · James Cheng
- 2022 Spotlight: Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor
  Lijun Zhang · Wei Jiang · Jinfeng Yi · Tianbao Yang
- 2022 Poster: Multi-block Min-max Bilevel Optimization with Applications in Multi-task Deep AUC Maximization
  Quanqi Hu · YONGJIAN ZHONG · Tianbao Yang
- 2022 Poster: Large-scale Optimization of Partial AUC in a Range of False Positive Rates
  Yao Yao · Qihang Lin · Tianbao Yang
- 2022 Poster: Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor
  Lijun Zhang · Wei Jiang · Jinfeng Yi · Tianbao Yang
- 2022 Poster: Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
  Wei Jiang · Gang Li · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2021 Poster: Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning
  ZHENHUAN YANG · Yunwen Lei · Puyu Wang · Tianbao Yang · Yiming Ying
- 2021 Poster: Revisiting Smoothed Online Learning
  Lijun Zhang · Wei Jiang · Shiyin Lu · Tianbao Yang
- 2021 Poster: Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence
  Qi Qi · Youzhi Luo · Zhao Xu · Shuiwang Ji · Tianbao Yang
- 2021 Poster: Online Convex Optimization with Continuous Switching Constraint
  Guanghui Wang · Yuanyu Wan · Tianbao Yang · Lijun Zhang
- 2021 Poster: An Online Method for A Class of Distributionally Robust Optimization with Non-convex Objectives
  Qi Qi · Zhishuai Guo · Yi Xu · Rong Jin · Tianbao Yang
- 2020 Poster: Improved Schemes for Episodic Memory-based Lifelong Learning
  Yunhui Guo · Mingrui Liu · Tianbao Yang · Tajana S Rosing
- 2020 Spotlight: Improved Schemes for Episodic Memory-based Lifelong Learning
  Yunhui Guo · Mingrui Liu · Tianbao Yang · Tajana S Rosing
- 2020 Poster: A Decentralized Parallel Algorithm for Training Generative Adversarial Nets
  Mingrui Liu · Wei Zhang · Youssef Mroueh · Xiaodong Cui · Jarret Ross · Tianbao Yang · Payel Das
- 2020 Poster: Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization
  Yan Yan · Yi Xu · Qihang Lin · Wei Liu · Tianbao Yang
- 2019 Poster: Non-asymptotic Analysis of Stochastic Methods for Non-Smooth Non-Convex Regularized Problems
  Yi Xu · Rong Jin · Tianbao Yang
- 2019 Poster: Stagewise Training Accelerates Convergence of Testing Error Over SGD
  Zhuoning Yuan · Yan Yan · Rong Jin · Tianbao Yang
- 2018 : Poster spotlight
  Tianbao Yang · Pavel Dvurechenskii · Panayotis Mertikopoulos · Hugo Berard
- 2018 Poster: First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time
  Yi Xu · Rong Jin · Tianbao Yang
- 2018 Poster: Adaptive Negative Curvature Descent with Applications in Non-convex Optimization
  Mingrui Liu · Zhe Li · Xiaoyu Wang · Jinfeng Yi · Tianbao Yang
- 2018 Poster: Faster Online Learning of Optimal Threshold for Consistent F-measure Optimization
  Xiaoxuan Zhang · Mingrui Liu · Xun Zhou · Tianbao Yang
- 2018 Poster: Fast Rates of ERM and Stochastic Approximation: Adaptive to Error Bound Conditions
  Mingrui Liu · Xiaoxuan Zhang · Lijun Zhang · Rong Jin · Tianbao Yang
- 2017 Poster: ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization
  Yi Xu · Mingrui Liu · Qihang Lin · Tianbao Yang
- 2017 Poster: Improved Dynamic Regret for Non-degenerate Functions
  Lijun Zhang · Tianbao Yang · Jinfeng Yi · Rong Jin · Zhi-Hua Zhou
- 2017 Poster: Adaptive Accelerated Gradient Converging Method under Hölderian Error Bound Condition
  Mingrui Liu · Tianbao Yang
- 2017 Poster: Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter
  Yi Xu · Qihang Lin · Tianbao Yang
- 2016 Poster: Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than $O(1/\epsilon)$
  Yi Xu · Yan Yan · Qihang Lin · Tianbao Yang
- 2016 Poster: Improved Dropout for Shallow and Deep Learning
  Zhe Li · Boqing Gong · Tianbao Yang