
Unsupervised Domain Adaptation with Adapter
Rongsheng Zhang · Yinhe Zheng · Xiaoxi Mao · Minlie Huang

Unsupervised domain adaptation (UDA) with pre-trained language models (LMs) has achieved promising results because these pre-trained models embed generic knowledge learned from various domains. However, fully fine-tuning the LM for UDA may distort this learned knowledge, and a fully fine-tuned LM is also expensive to deploy. This paper explores an adapter-based fine-tuning approach for unsupervised domain adaptation. Specifically, several trainable adapter modules are inserted into a pre-trained LM, and the embedded generic knowledge is preserved by freezing the parameters of the original LM during fine-tuning. A domain-fusion scheme is introduced to train these adapters on a corpus of mixed domains so that transferable features are captured better. Extensive experiments are carried out on two benchmark datasets, and the results demonstrate that our approach is effective across different tasks, dataset sizes, and domain similarities.
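The core idea described above can be sketched in a few lines: a bottleneck adapter down-projects a frozen layer's hidden state, applies a nonlinearity, up-projects back, and adds a residual connection so the pre-trained features pass through unchanged when the adapter is near-zero. This is a minimal NumPy illustration with hypothetical dimensions, not the paper's actual implementation.

```python
import numpy as np

def adapter(h, W_down, W_up):
    # Bottleneck adapter: down-project, ReLU, up-project, plus a
    # residual connection that preserves the frozen LM's output.
    z = np.maximum(0.0, h @ W_down)  # bottleneck activation
    return h + z @ W_up              # residual keeps pre-trained features

# Hypothetical sizes: hidden dim 8, bottleneck dim 2.
rng = np.random.default_rng(0)
h = rng.normal(size=(1, 8))              # hidden state from a frozen layer
W_down = rng.normal(size=(8, 2)) * 0.01  # trainable adapter weights
W_up = rng.normal(size=(2, 8)) * 0.01
out = adapter(h, W_down, W_up)
print(out.shape)  # same shape as h, so the adapter slots between frozen layers
```

Because only `W_down` and `W_up` receive gradients, the number of trainable parameters per layer is small compared with full fine-tuning, which is what makes deployment cheaper.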

Author Information

Rongsheng Zhang (Fuxi AI Lab, Netease Inc.)
Yinhe Zheng (Samsung Research China – Beijing (SRC-B))
Xiaoxi Mao (Fuxi AI Lab, Netease Inc.)
Minlie Huang (Tsinghua University)
