We describe and explore a novel setting of active learning (AL) in which multiple target models must be learned simultaneously. In many real applications, a machine learning system must be deployed on diverse devices with varying computational resources (e.g., workstations, mobile phones, edge devices), which creates a demand for training multiple target models on the same labeled dataset. However, AL is generally believed to be model-dependent and untransferable, i.e., the data queried by one model may be less effective for training another model. This phenomenon naturally raises the question: "Does there exist an AL method that is effective for multiple target models?" In this paper, we answer this question by theoretically analyzing the label complexity of active and passive learning in the setting with multiple target models, and conclude that AL retains the potential to achieve better label complexity in this novel setting. Based on this insight, we further propose an agnostic AL sampling strategy that selects the examples located in the joint disagreement regions of the different target models. Experimental results on OCR benchmarks show that the proposed method significantly surpasses traditional active and passive learning methods in this challenging setting.
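The core sampling idea described above, querying examples in the joint disagreement regions of the target models, can be sketched as follows. This is an illustrative approximation, not the paper's exact algorithm: the function name and the use of pairwise prediction disagreement as a proxy for the disagreement region are our assumptions.

```python
from itertools import combinations

def joint_disagreement_query(predictions, budget):
    """Illustrative sketch (not the authors' exact method).

    predictions: list of per-model predicted-label lists over the same
    unlabeled pool, one list per target model.
    budget: number of examples to query.

    Returns indices of the examples on which the models disagree the
    most, used here as a proxy for the joint disagreement region.
    """
    n = len(predictions[0])
    scores = []
    for idx in range(n):
        labels = [p[idx] for p in predictions]
        # Count how many pairs of models disagree on this example.
        scores.append(sum(a != b for a, b in combinations(labels, 2)))
    # Query the highest-disagreement examples first (stable tie order).
    ranked = sorted(range(n), key=lambda i: -scores[i])
    return ranked[:budget]
```

For example, with three models predicting `[0, 0, 1, 1]`, `[0, 1, 1, 0]`, and `[0, 1, 0, 0]` over four unlabeled examples, example 0 (where all models agree) is ranked last, while the three contested examples are queried first.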
Author Information
Ying-Peng Tang (Nanjing University of Aeronautics and Astronautics)
Sheng-Jun Huang (Nanjing University of Aeronautics and Astronautics)