Uncertainty-aware Self-training for Few-shot Text Classification
Subhabrata Mukherjee · Ahmed Awadallah

Wed Dec 09 07:30 AM -- 07:40 AM (PST) @ Orals & Spotlights: Continual/Meta/Misc Learning

The recent success of pre-trained language models crucially hinges on fine-tuning them on large amounts of labeled data for the downstream task, which is typically expensive to acquire or difficult to access for many applications. We study self-training, one of the earliest semi-supervised learning approaches, as a way to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task. The standard self-training mechanism randomly samples instances from the unlabeled pool to generate pseudo-labels and augment the labeled data. We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network, leveraging recent advances in Bayesian deep learning. Specifically, we propose (i) acquisition functions to select instances from the unlabeled pool leveraging Monte Carlo (MC) Dropout, and (ii) a learning mechanism leveraging model confidence for self-training. As an application, we focus on text classification with five benchmark datasets. We show that our methods, leveraging only 20-30 labeled samples per class for each task for training and validation, perform within 3% of fully supervised pre-trained language models fine-tuned on thousands of labels, with an aggregate accuracy of 91% and improvements of up to 12% over baselines.
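The abstract describes acquisition functions over MC Dropout predictions; the sketch below illustrates one common uncertainty-based acquisition function (BALD, the mutual information between the prediction and the model weights) applied to stochastic forward passes, with easy (low-uncertainty) or hard (high-uncertainty) selection of instances for pseudo-labeling. This is an illustrative assumption, not necessarily the exact acquisition function used in the paper, and the function names are hypothetical.

```python
import numpy as np

def bald_acquisition(mc_probs):
    """BALD score per example from MC Dropout samples.

    mc_probs: array of shape (T, N, C) -- class probabilities from T
    stochastic (dropout-enabled) forward passes over N unlabeled examples.
    """
    mean_probs = mc_probs.mean(axis=0)  # (N, C) predictive distribution
    # Predictive entropy H[y|x] of the averaged distribution.
    pred_entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    # Expected entropy E_w[H[y|x, w]] over the sampled dropout masks.
    exp_entropy = -(mc_probs * np.log(mc_probs + 1e-12)).sum(axis=2).mean(axis=0)
    # Mutual information: high when passes disagree, ~0 when they agree.
    return pred_entropy - exp_entropy

def select_for_pseudo_labeling(mc_probs, k, easy=True):
    """Pick k instances by uncertainty and return their pseudo-labels."""
    scores = bald_acquisition(mc_probs)
    order = np.argsort(scores)  # ascending: most certain first
    idx = order[:k] if easy else order[-k:]
    pseudo_labels = mc_probs.mean(axis=0).argmax(axis=1)[idx]
    return idx, pseudo_labels
```

Easy selection favors instances the model is consistently confident about (lower pseudo-label noise), while hard selection favors informative instances where the dropout samples disagree; the trade-off between the two is exactly what an acquisition function for self-training has to balance.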

Author Information

Subhabrata Mukherjee (Microsoft Research)

Principal Researcher at Microsoft Research, leading a cross-org initiative on Efficient AI at Scale. Our focus is on efficient learning of massive neural networks, covering both model efficiency (e.g., neural architecture search, model compression, sparse and modular learning) and data efficiency (e.g., zero-shot and few-shot learning, semi-supervised learning). We develop state-of-the-art computationally efficient models and techniques to enable AI practitioners, researchers, and engineers to use large-scale models in practice. Our technologies have been deployed in several enterprise scenarios, including Turing, Bing, and Microsoft 365. Honors: 2022 MIT Technology Review Innovators Under 35 Semi-finalist (listed among 100 innovators under 35 worldwide) for work on Efficient AI.

Ahmed Awadallah (Microsoft)
