Unsupervised domain adaptation (UDA) with pre-trained language models (LMs) has achieved promising results, since these pre-trained models embed generic knowledge learned from various domains. However, fully fine-tuning the LM for UDA may distort this learned knowledge, and a fully fine-tuned LM is also expensive to deploy. This paper explores an adapter-based fine-tuning approach for unsupervised domain adaptation. Specifically, several trainable adapter modules are inserted into a pre-trained LM, and the embedded generic knowledge is preserved by keeping the original LM parameters frozen during fine-tuning. A domain-fusion scheme is introduced to train these adapters on a corpus of mixed domains so that they better capture transferable features. Extensive experiments on two benchmark datasets demonstrate that our approach is effective across different tasks, dataset sizes, and domain similarities.
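Below is a minimal sketch of the general idea described in the abstract: small bottleneck adapters are attached to a frozen pre-trained LM, so only the adapter parameters are trained. This assumes a BERT-style backbone from the transformers library; the Adapter class, bottleneck size, and hook-based insertion point are illustrative assumptions rather than the authors' implementation, and the domain-fusion training scheme is not shown.

```python
# Sketch: adapter-based fine-tuning with a frozen pre-trained LM.
# Assumptions (not from the paper): BERT backbone, bottleneck size 64,
# adapters attached after each Transformer layer via forward hooks.
import torch
import torch.nn as nn
from transformers import BertModel


class Adapter(nn.Module):
    """Bottleneck adapter: down-project -> nonlinearity -> up-project + residual."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        # Residual connection keeps the frozen LM's features intact.
        return x + self.up(self.act(self.down(x)))


# Freeze the pre-trained LM so its generic knowledge is not distorted.
lm = BertModel.from_pretrained("bert-base-uncased")
for p in lm.parameters():
    p.requires_grad = False

# One adapter per Transformer layer.
adapters = nn.ModuleList(Adapter(lm.config.hidden_size) for _ in lm.encoder.layer)

# Attach each adapter after its layer's output without modifying the LM code.
for layer, adapter in zip(lm.encoder.layer, adapters):
    layer.register_forward_hook(
        lambda module, inputs, output, adapter=adapter:
            (adapter(output[0]),) + output[1:]
    )

# Only the adapter (and any task head) parameters are optimized.
optimizer = torch.optim.AdamW(adapters.parameters(), lr=1e-4)
```

Since the backbone stays fixed, only the small adapter modules need to be stored and swapped per target domain, which is what makes this setup cheaper to deploy than a fully fine-tuned LM.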
Author Information
Rongsheng Zhang (Fuxi AI Lab, Netease Inc.)
Yinhe Zheng (Samsung Research China – Beijing (SRC-B))
Xiaoxi Mao (Fuxi AI Lab, Netease Inc.)
Minlie Huang (Tsinghua University)
More from the Same Authors
- 2020 Poster: Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond »
  Kaidi Xu · Zhouxing Shi · Huan Zhang · Yihan Wang · Kai-Wei Chang · Minlie Huang · Bhavya Kailkhura · Xue Lin · Cho-Jui Hsieh
- 2020 Poster: Reinforced Molecular Optimization with Neighborhood-Controlled Grammars »
  Chencheng Xu · Qiao Liu · Minlie Huang · Tao Jiang
- 2019: Poster lightning round »
  Yinhe Zheng · Anders Søgaard · Abdelrhman Saleh · Youngsoo Jang · Hongyu Gong · Omar U. Florez · Margaret Li · Andrea Madotto · The Tung Nguyen · Ilia Kulikov · Arash Einolghozati · Yiru Wang · Mihail Eric · Victor Petrén Bach Hansen · Nurul Lubis · Yen-Chen Wu