Surgical Fine-Tuning Improves Adaptation to Distribution Shifts
Yoonho Lee · Annie Chen · Fahim Tajwar · Ananya Kumar · Huaxiu Yao · Percy Liang · Chelsea Finn
Event URL: https://openreview.net/forum?id=r8juz2t749J

A common approach to transfer learning under distribution shift is to fine-tune the last few layers of a pre-trained model, preserving learned features while also adapting to the new task. This paper shows that in such settings, selectively fine-tuning a subset of layers (which we term surgical fine-tuning) matches or outperforms commonly used fine-tuning approaches. Moreover, the type of distribution shift influences which subset is more effective to tune: for example, for image corruptions, fine-tuning only the first few layers works best. We validate our findings systematically across seven real-world data tasks spanning three types of distribution shifts. Theoretically, we prove that for two-layer neural networks in an idealized setting, first-layer tuning can outperform fine-tuning all layers. Intuitively, fine-tuning more parameters on a small target dataset can cause information learned during pre-training to be forgotten, and the relevant information depends on the type of shift.
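As a concrete illustration (a minimal sketch, not the authors' released code), surgical fine-tuning amounts to freezing a pre-trained model and re-enabling gradients only for the chosen subset of layers. The specific choices below, a torchvision ResNet-50 and its first residual stage `layer1`, are assumptions for the example, reflecting the paper's finding that tuning early layers works best under input-level shifts such as image corruptions.

```python
# Sketch of surgical fine-tuning in PyTorch (illustrative only).
# Assumed setup: torchvision ResNet-50 with ImageNet weights; the tuned
# subset is "layer1", the first residual stage.
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V2")

# Freeze every parameter of the pre-trained model.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze only the selected subset of layers (here: the first residual stage).
for p in model.layer1.parameters():
    p.requires_grad = True

# Optimize only the trainable parameters on the small target dataset.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3,
    momentum=0.9,
)
```

For other shift types, the same pattern applies with a different subset (e.g., later blocks or the final classifier head) left trainable while the rest of the network stays frozen.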

Author Information

Yoonho Lee (Stanford University)
Annie Chen (Stanford University)
Fahim Tajwar (Stanford University)
Ananya Kumar (Stanford University)
Huaxiu Yao (Stanford University)
Percy Liang (Stanford University)
Chelsea Finn (Stanford)
