In few-shot domain adaptation (FDA), classifiers for the target domain (TD) are trained with accessible labeled data from the source domain (SD) and a few labeled data from the TD. However, data often contain private information, e.g., data distributed on personal phones. Thus, private information will be leaked if we directly access SD data to train a target-domain classifier, as existing FDA methods require. In this paper, to prevent privacy leakage from the SD, we consider a very challenging problem setting, few-shot hypothesis adaptation (FHA), in which the classifier for the TD must be trained using only a few labeled target data and a well-trained SD classifier. Since FHA never accesses SD data, the private information in the SD is well protected. To this end, we propose a target-oriented hypothesis adaptation network (TOHAN) to solve the FHA problem: we generate highly compatible unlabeled data (i.e., an intermediate domain) to help train a target-domain classifier. TOHAN maintains two deep networks simultaneously: one focuses on learning the intermediate domain, while the other handles intermediate-to-target distributional adaptation and target-risk minimization. Experimental results show that TOHAN significantly outperforms competitive baselines.
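The abstract describes the FHA setting only at a high level. The toy sketch below is a rough illustration of that setting, not the authors' TOHAN implementation: given only a frozen source-domain classifier (no source data) and a few labeled target points, it synthesizes an "intermediate domain" by nudging copies of the target points toward regions where the source classifier is confident, pseudo-labels them with the source classifier, and then trains a target classifier on the combined data. The logistic model, the confidence-ascent generation step, and all variable names are assumptions made for illustration.

```python
# Conceptual sketch of the FHA setting -- NOT the authors' TOHAN method.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "well-trained" source-domain classifier: only its parameters are
# available, never the source data (the FHA privacy constraint).
w_src = np.array([2.0, -1.0])
b_src = 0.5

# A few labeled target examples (the "few-shot" part).
X_tgt = rng.normal(size=(5, 2))
y_tgt = (sigmoid(X_tgt @ w_src + b_src) > 0.5).astype(float)

# Step 1: synthesize an intermediate domain by nudging noisy copies of the
# target points toward regions where the source classifier is confident.
X_mid = X_tgt.repeat(20, axis=0) + 0.1 * rng.normal(size=(100, 2))
for _ in range(50):
    p = sigmoid(X_mid @ w_src + b_src)
    # gradient ascent on (p - 0.5)^2: pushes points away from the source
    # decision boundary, i.e. toward source-compatible regions
    grad = ((p - 0.5) * p * (1 - p))[:, None] * w_src
    X_mid += 0.5 * grad

# Pseudo-label the intermediate domain with the source classifier.
y_mid = (sigmoid(X_mid @ w_src + b_src) > 0.5).astype(float)

# Step 2: train the target classifier on intermediate + few target data
# (standing in for the intermediate-to-target adaptation network).
X = np.vstack([X_mid, X_tgt])
y = np.concatenate([y_mid, y_tgt])
w, b = np.zeros(2), 0.0
for _ in range(200):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# Training accuracy of the adapted classifier on the combined data.
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"train acc: {acc:.2f}")
```

The key point the sketch mirrors is that step 2 never touches source data: everything the target classifier learns from the source domain flows through the frozen hypothesis `(w_src, b_src)` alone.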
Author Information
Haoang Chi (NUDT)
Feng Liu (University of Technology Sydney)
Wenjing Yang (National University of Defense Technology)
Long Lan (National University of Defense Technology, Tsinghua University)
Tongliang Liu (The University of Sydney)
Bo Han (HKBU / RIKEN)
William Cheung (Hong Kong Baptist University)
James Kwok (Hong Kong University of Science and Technology)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation »
More from the Same Authors
- 2021 Poster: Understanding and Improving Early Stopping for Learning with Noisy Labels »
  Yingbin Bai · Erkun Yang · Bo Han · Yanhua Yang · Jiatong Li · Yinian Mao · Gang Niu · Tongliang Liu
- 2021 Poster: Effective Meta-Regularization by Kernelized Proximal Regularization »
  Weisen Jiang · James Kwok · Yu Zhang
- 2021 Poster: Graph Adversarial Self-Supervised Learning »
  Longqi Yang · Liangliang Zhang · Wenjing Yang
- 2021 Poster: Universal Semi-Supervised Learning »
  Zhuo Huang · Chao Xue · Bo Han · Jian Yang · Chen Gong
- 2021 Poster: Probabilistic Margins for Instance Reweighting in Adversarial Training »
  qizhou wang · Feng Liu · Bo Han · Tongliang Liu · Chen Gong · Gang Niu · Mingyuan Zhou · Masashi Sugiyama
- 2021 Poster: Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data »
  Feng Liu · Wenkai Xu · Jie Lu · Danica J. Sutherland
- 2021 Poster: Instance-dependent Label-noise Learning under a Structural Causal Model »
  Yu Yao · Tongliang Liu · Mingming Gong · Bo Han · Gang Niu · Kun Zhang
- 2021 Poster: Confident Anchor-Induced Multi-Source Free Domain Adaptation »
  Jiahua Dong · Zhen Fang · Anjin Liu · Gan Sun · Tongliang Liu
- 2020 Poster: Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning »
  Yu Yao · Tongliang Liu · Bo Han · Mingming Gong · Jiankang Deng · Gang Niu · Masashi Sugiyama
- 2020 Poster: Part-dependent Label Noise: Towards Instance-dependent Label Noise »
  Xiaobo Xia · Tongliang Liu · Bo Han · Nannan Wang · Mingming Gong · Haifeng Liu · Gang Niu · Dacheng Tao · Masashi Sugiyama
- 2020 Spotlight: Part-dependent Label Noise: Towards Instance-dependent Label Noise »
  Xiaobo Xia · Tongliang Liu · Bo Han · Nannan Wang · Mingming Gong · Haifeng Liu · Gang Niu · Dacheng Tao · Masashi Sugiyama
- 2020 Poster: Timeseries Anomaly Detection using Temporal Hierarchical One-Class Network »
  Lifeng Shen · Zhuocong Li · James Kwok
- 2020 Poster: Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS »
  Han Shi · Renjie Pi · Hang Xu · Zhenguo Li · James Kwok · Tong Zhang
- 2020 Poster: Domain Generalization via Entropy Regularization »
  Shanshan Zhao · Mingming Gong · Tongliang Liu · Huan Fu · Dacheng Tao
- 2019 Poster: Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback »
  Shuai Zheng · Ziyue Huang · James Kwok
- 2019 Poster: Are Anchor Points Really Indispensable in Label-Noise Learning? »
  Xiaobo Xia · Tongliang Liu · Nannan Wang · Bo Han · Chen Gong · Gang Niu · Masashi Sugiyama
- 2019 Poster: Normalization Helps Training of Quantized LSTM »
  Lu Hou · Jinhua Zhu · James Kwok · Fei Gao · Tao Qin · Tie-Yan Liu
- 2019 Poster: Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence »
  Fengxiang He · Tongliang Liu · Dacheng Tao
- 2018 Poster: Scalable Robust Matrix Factorization with Nonconvex Loss »
  Quanming Yao · James Kwok
- 2015 Poster: Fast Second Order Stochastic Backpropagation for Variational Inference »
  Kai Fan · Ziteng Wang · Jeff Beck · James Kwok · Katherine Heller
- 2012 Poster: Mandatory Leaf Node Prediction in Hierarchical Multilabel Classification »
  Wei Bi · James Kwok
- 2009 Poster: Accelerated Gradient Methods for Stochastic Optimization and Online Learning »
  Chonghai Hu · James Kwok · Weike Pan