Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Robert Logan · Ivana Balazevic · Eric Wallace · Fabio Petroni · Sameer Singh · Sebastian Riedel

Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning. In this work, we show that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering. In fact, one can use null prompts, prompts that contain neither task-specific templates nor training examples, and achieve accuracy competitive with manually tuned prompts across a wide range of tasks. While finetuning LMs does introduce new parameters for each downstream task, we show that this memory overhead can be substantially reduced: finetuning only the bias terms can achieve comparable or better accuracy than standard finetuning while only updating 0.1% of the parameters. All in all, we recommend finetuning LMs for few-shot learning as it is more accurate, robust to different prompts, and can be made nearly as efficient as using frozen LMs.
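The abstract describes two concrete techniques: null prompts (no task template, no in-context examples) and bias-only finetuning. The sketch below illustrates both with plain PyTorch and Hugging Face Transformers; it is not the authors' released code, and the model name, example sentence, and learning rate are assumptions for illustration.

    # Minimal sketch of the paper's two ideas, assuming a Hugging Face masked LM.
    # The model name, example sentence, and learning rate are illustrative only.
    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    MODEL_NAME = "roberta-base"  # assumption: any masked LM would work
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

    # (1) Null prompt: no task description, no in-context examples --
    # just the raw input followed by the mask token.
    text = "A touching and funny film."  # hypothetical input
    prompt = f"{text} {tokenizer.mask_token}"
    inputs = tokenizer(prompt, return_tensors="pt")

    # (2) Bias-only finetuning: freeze everything except bias terms
    # (roughly 0.1% of parameters, per the abstract).
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")

    # Optimize only the unfrozen (bias) parameters.
    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable params: {trainable:,} / {total:,} "
          f"({100 * trainable / total:.2f}%)")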

Author Information

Robert Logan (University of California, Irvine)
Ivana Balazevic (DeepMind)
Eric Wallace (UC Berkeley)
Fabio Petroni (Facebook AI Research)
Sameer Singh (University of California, Irvine)

Sameer Singh is an Assistant Professor at UC Irvine working on the robustness and interpretability of machine learning. Sameer has presented tutorials and invited workshop talks at EMNLP, NeurIPS, NAACL, WSDM, ICLR, ACL, and AAAI, and received paper awards at KDD 2016, ACL 2018, EMNLP 2019, AKBC 2020, and ACL 2020. Website: http://sameersingh.org/

Sebastian Riedel (University College London & Facebook AI Research)
