Unsupervised domain adaptation has attracted considerable academic attention by transferring knowledge from a labeled source domain to an unlabeled target domain. However, most existing methods assume the source data are drawn from a single domain, and thus cannot exploit the complementary transferable knowledge of multiple source domains with large distribution discrepancies. Moreover, they require access to the source data during training, which is inefficient and impractical due to privacy preservation and memory storage concerns. To address these challenges, we develop a novel Confident-Anchor-induced multi-source-free Domain Adaptation (CAiDA) model, a pioneering exploration of knowledge adaptation from multiple source domains to an unlabeled target domain without any source data, using only pre-trained source models. Specifically, a source-specific transferable perception module is proposed to automatically quantify the contributions of the complementary knowledge transferred from the multi-source domains to the target domain. To generate pseudo labels for the target domain without access to the source data, we develop a confident-anchor-induced pseudo label generator, which constructs a confident anchor group and assigns each unconfident target sample its semantically nearest confident anchor. Furthermore, a class-relationship-aware consistency loss is proposed to preserve consistent inter-class relationships by aligning soft confusion matrices across domains. Theoretical analysis answers why multiple source domains are better than a single source domain, and establishes a novel learning bound that shows the effectiveness of exploiting multiple source domains. Experiments on several representative datasets illustrate the superiority of the proposed CAiDA model. The code is available at https://github.com/Learning-group123/CAiDA.
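The confident-anchor idea described above can be illustrated with a minimal sketch (not the authors' implementation; function and parameter names are hypothetical): target samples whose averaged multi-source prediction is highly confident serve as class anchors, and each unconfident sample is relabeled with the class of its nearest anchor in feature space.

```python
# Hypothetical sketch of confident-anchor-induced pseudo-labelling.
# Assumes target features and averaged multi-source softmax predictions
# are already computed; the threshold value is illustrative.
import numpy as np

def anchor_pseudo_labels(features, probs, threshold=0.9):
    """features: (N, D) target embeddings; probs: (N, C) averaged
    multi-source softmax outputs. Returns (N,) pseudo labels."""
    preds = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold  # confident anchor group
    anchor_feats = features[confident]
    anchor_labels = preds[confident]
    labels = preds.copy()
    # Assign each unconfident sample the label of its nearest confident anchor.
    for i in np.where(~confident)[0]:
        dists = np.linalg.norm(anchor_feats - features[i], axis=1)
        labels[i] = anchor_labels[dists.argmin()]
    return labels
```

In CAiDA the nearest-anchor search is performed in a semantic feature space; this sketch uses plain Euclidean distance for brevity.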
Author Information
Jiahua Dong (Shenyang Institute of Automation, Chinese Academy of Sciences)
Zhen Fang (University of Technology Sydney)
Anjin Liu (University of Technology Sydney)
Gan Sun (Chinese Academy of Sciences)
Tongliang Liu (The University of Sydney)
More from the Same Authors
- 2021 Spotlight: TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation »
  Haoang Chi · Feng Liu · Wenjing Yang · Long Lan · Tongliang Liu · Bo Han · William Cheung · James Kwok
- 2021 Poster: Understanding and Improving Early Stopping for Learning with Noisy Labels »
  Yingbin Bai · Erkun Yang · Bo Han · Yanhua Yang · Jiatong Li · Yinian Mao · Gang Niu · Tongliang Liu
- 2021 Poster: Probabilistic Margins for Instance Reweighting in Adversarial Training »
  Qizhou Wang · Feng Liu · Bo Han · Tongliang Liu · Chen Gong · Gang Niu · Mingyuan Zhou · Masashi Sugiyama
- 2021 Poster: Instance-dependent Label-noise Learning under a Structural Causal Model »
  Yu Yao · Tongliang Liu · Mingming Gong · Bo Han · Gang Niu · Kun Zhang
- 2021 Poster: TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation »
  Haoang Chi · Feng Liu · Wenjing Yang · Long Lan · Tongliang Liu · Bo Han · William Cheung · James Kwok
- 2020 Poster: Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning »
  Yu Yao · Tongliang Liu · Bo Han · Mingming Gong · Jiankang Deng · Gang Niu · Masashi Sugiyama
- 2020 Poster: Part-dependent Label Noise: Towards Instance-dependent Label Noise »
  Xiaobo Xia · Tongliang Liu · Bo Han · Nannan Wang · Mingming Gong · Haifeng Liu · Gang Niu · Dacheng Tao · Masashi Sugiyama
- 2020 Spotlight: Part-dependent Label Noise: Towards Instance-dependent Label Noise »
  Xiaobo Xia · Tongliang Liu · Bo Han · Nannan Wang · Mingming Gong · Haifeng Liu · Gang Niu · Dacheng Tao · Masashi Sugiyama
- 2020 Poster: Domain Generalization via Entropy Regularization »
  Shanshan Zhao · Mingming Gong · Tongliang Liu · Huan Fu · Dacheng Tao
- 2019 Poster: Are Anchor Points Really Indispensable in Label-Noise Learning? »
  Xiaobo Xia · Tongliang Liu · Nannan Wang · Bo Han · Chen Gong · Gang Niu · Masashi Sugiyama
- 2019 Poster: Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence »
  Fengxiang He · Tongliang Liu · Dacheng Tao