Precisely annotating objects with multiple labels is costly and has become a critical bottleneck in real-world multi-label classification tasks. In contrast, deciding the relative order of a pair of labels is far less laborious than collecting exact labels. However, such pairwise relevance orderings provide weaker supervision than exact labels, so learning effectively from them is an important challenge. In this paper, we formalize this problem as a novel learning framework, called multi-label learning with pairwise relevance ordering (PRO). We show that an unbiased estimator of the classification risk can be derived from PRO examples alone by using a cost-sensitive loss. Theoretically, we provide an estimation error bound for the proposed estimator and further prove that it is consistent with respect to the commonly used ranking loss. Empirical studies on multiple datasets and metrics validate the effectiveness of the proposed method.
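The abstract does not spell out the form of the cost-sensitive unbiased risk estimator, so the following is only a minimal sketch of the PRO learning setup it describes: each training example pairs an instance with two labels whose relative relevance order is known, and a multi-label scorer is trained so that the more relevant label receives the higher score. The scorer, the logistic pairwise surrogate, and all identifiers (`Scorer`, `pairwise_ordering_loss`) are illustrative assumptions, not the paper's actual estimator.

```python
import torch
import torch.nn.functional as F

# Sketch only: learn a multi-label scorer from PRO examples, where each
# example is (x, i, j) meaning "label i is more relevant than label j".

class Scorer(torch.nn.Module):
    """Simple linear multi-label scorer: one relevance score per label."""
    def __init__(self, num_features, num_labels):
        super().__init__()
        self.linear = torch.nn.Linear(num_features, num_labels)

    def forward(self, x):
        return self.linear(x)  # (batch, num_labels) relevance scores

def pairwise_ordering_loss(scores, pos_idx, neg_idx):
    """Logistic pairwise surrogate encouraging score[pos] > score[neg].

    A generic ranking loss used for illustration; it is not the
    cost-sensitive unbiased risk estimator derived in the paper.
    """
    margin = scores.gather(1, pos_idx.unsqueeze(1)) - scores.gather(1, neg_idx.unsqueeze(1))
    return F.softplus(-margin).mean()

# Toy usage with random data (10 features, 5 candidate labels).
torch.manual_seed(0)
model = Scorer(num_features=10, num_labels=5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)                            # instances
pos = torch.randint(0, 5, (32,))                   # label judged more relevant
neg = (pos + 1 + torch.randint(0, 4, (32,))) % 5   # a different, less relevant label

for _ in range(100):
    optimizer.zero_grad()
    loss = pairwise_ordering_loss(model(x), pos, neg)
    loss.backward()
    optimizer.step()
```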
Author Information
Ming-Kun Xie (Nanjing University of Aeronautics and Astronautics)
Sheng-Jun Huang (Nanjing University of Aeronautics and Astronautics)