When finetuning a pretrained language model for natural language generation tasks, one is currently faced with a tradeoff. Lightweight finetuning (e.g., prefix-tuning, adapters), which freezes all or most of the parameters of the pretrained model, has been shown to achieve stronger out-of-distribution (OOD) performance than full finetuning, which tunes all of the parameters. However, lightweight finetuning can underperform full finetuning in-distribution (ID). In this work, we present methods to combine the benefits of full and lightweight finetuning, achieving strong performance both ID and OOD. First, we show that an ensemble of the lightweight and full finetuning models achieves the best of both worlds: performance matching the better of full and lightweight finetuning, both ID and OOD. Second, we show that we can achieve similar improvements using a single model instead of two with our proposed cocktail finetuning, which augments full finetuning via distillation from a lightweight model. Finally, we provide some explanatory theory in a multiclass logistic regression setting with a large number of classes, describing how distillation on ID data can transfer the OOD behavior of one model to another.
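To make the two ideas in the abstract concrete, the sketch below illustrates (i) a simple prediction-level ensemble of the full and lightweight models and (ii) a distillation-augmented training objective in the spirit of cocktail finetuning. This is a minimal sketch assuming a PyTorch-style setup, not the authors' released code; the names (`full_logits`, `light_logits`) and the mixing weight `alpha` and temperature `T` are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def ensemble_probs(full_logits: torch.Tensor, light_logits: torch.Tensor) -> torch.Tensor:
    """Average the predictive distributions of the full and lightweight models."""
    return 0.5 * (F.softmax(full_logits, dim=-1) + F.softmax(light_logits, dim=-1))

def cocktail_loss(full_logits: torch.Tensor,
                  light_logits: torch.Tensor,
                  labels: torch.Tensor,
                  alpha: float = 0.5,
                  T: float = 1.0) -> torch.Tensor:
    """Augment full finetuning with distillation from a frozen lightweight model.

    full_logits:  logits from the fully finetuned model (trainable student)
    light_logits: logits from the lightweight-finetuned model (frozen teacher)
    labels:       gold labels on the in-distribution training data
    """
    # Standard in-distribution cross-entropy term for full finetuning.
    ce = F.cross_entropy(full_logits, labels)

    # Distillation term: match the lightweight model's (temperature-softened) distribution.
    teacher = F.softmax(light_logits.detach() / T, dim=-1)
    student = F.log_softmax(full_logits / T, dim=-1)
    kd = F.kl_div(student, teacher, reduction="batchmean") * (T ** 2)

    # Interpolate between pure full finetuning (alpha=0) and pure distillation (alpha=1).
    return (1 - alpha) * ce + alpha * kd
```

In this sketch the lightweight model is used only as a frozen teacher on ID data, which matches the abstract's claim that distillation on ID data can transfer OOD behavior from one model to another; the specific weighting and temperature schedule are assumptions, not details from the paper.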
Author Information
John Hewitt (Stanford University)
Xiang Li (Stanford University)
Sang Michael Xie (Stanford University)
Benjamin Newman (Stanford University)
Percy Liang (Stanford University)

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans machine learning and natural language processing, with the goal of developing trustworthy agents that can communicate effectively with people and improve over time through interaction. Specific topics include question answering, dialogue, program induction, interactive learning, and reliable machine learning. His awards include the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).
More from the Same Authors
- 2021 Spotlight: Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning »
  Colin Wei · Sang Michael Xie · Tengyu Ma
- 2021: Simple Baselines Are Strong Performers for Differentially Private Natural Language Processing »
  Xuechen (Chen) Li · Florian Tramer · Percy Liang · Tatsunori Hashimoto
- 2021: Calibrated Ensembles: A Simple Way to Mitigate ID-OOD Accuracy Tradeoffs »
  Ananya Kumar · Aditi Raghunathan · Tengyu Ma · Percy Liang
- 2021: How Does Contrastive Pre-training Connect Disparate Domains? »
  Kendrick Shen · Robert Jones · Ananya Kumar · Sang Michael Xie · Percy Liang
- 2021: Extending the WILDS Benchmark for Unsupervised Adaptation »
  Shiori Sagawa · Pang Wei Koh · Tony Lee · Irena Gao · Sang Michael Xie · Kendrick Shen · Ananya Kumar · Weihua Hu · Michihiro Yasunaga · Henrik Marklund · Sara Beery · Ian Stavness · Jure Leskovec · Kate Saenko · Tatsunori Hashimoto · Sergey Levine · Chelsea Finn · Percy Liang
- 2023 Poster: Learning to Compress Prompts with Gist Tokens »
  Jesse Mu · Xiang Li · Noah Goodman
- 2022 Panel: Panel 4A-3: QUARK: Controllable Text… & Diffusion-LM Improves Controllable… »
  Xiang Li · Ximing Lu
- 2022 Poster: Diffusion-LM Improves Controllable Text Generation »
  Xiang Li · John Thickstun · Ishaan Gulrajani · Percy Liang · Tatsunori Hashimoto
- 2021 Poster: Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning »
  Colin Wei · Sang Michael Xie · Tengyu Ma
- 2019 Poster: Unlabeled Data Improves Adversarial Robustness »
  Yair Carmon · Aditi Raghunathan · Ludwig Schmidt · John Duchi · Percy Liang