Transfer learning from large pre-trained language models (PLMs) has become the de-facto method for a wide range of natural language processing tasks. Current transfer learning methods, combined with PLMs, have seen outstanding success in transferring knowledge to new tasks, domains, and even languages. However, existing methods, including fine-tuning, in-context learning, parameter-efficient tuning, and semi-parametric models with knowledge augmentation, still lack consistently good performance across different tasks, domains, varying sizes of data resources, and diverse textual inputs.
This workshop aims to bring together researchers from different backgrounds to share their latest work on efficient and robust transfer learning methods, discuss the challenges and risks of transfer learning models when deployed in the wild, better understand positive and negative transfer, and debate future directions.