Transfer Learning for Natural Language Processing
Alon Albalak · Colin Raffel · Chunting Zhou · Deepak Ramachandran · Xuezhe Ma · Sebastian Ruder

Sat Dec 03 06:50 AM -- 03:00 PM (PST) @ Theater C
Event URL: https://tl4nlp.github.io/

Transfer learning from large pre-trained language models (PLMs) has become the de facto method for a wide range of natural language processing tasks. Current transfer learning methods built on PLMs have achieved outstanding success in transferring knowledge to new tasks, domains, and even languages. However, existing approaches, including fine-tuning, in-context learning, parameter-efficient tuning, and semi-parametric models with knowledge augmentation, still lack consistently good performance across different tasks, domains, data-resource sizes, and diverse textual inputs.

This workshop invites researchers from different backgrounds to share their latest work on efficient and robust transfer learning methods, discuss the challenges and risks of deploying transfer learning models in the wild, deepen understanding of positive and negative transfer, and debate future directions.

Author Information

Alon Albalak (University of California, Santa Barbara)
Colin Raffel (UNC Chapel Hill and Hugging Face)
Chunting Zhou (FAIR)
Deepak Ramachandran (Google)
Xuezhe Ma (University of Southern California)
Sebastian Ruder (Google Research)
