Modular Meta-Learning with Shrinkage
Yutian Chen, Abe Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew Hoffman, Nando de Freitas
Spotlight presentation: Orals & Spotlights Track 23: Graph/Meta Learning/Software
on 2020-12-09, 19:50–20:00 (UTC-08:00)
Abstract: Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components. Updating only these task-specific modules then allows the model to be adapted to low-data tasks for as many steps as necessary without risking overfitting. Unfortunately, existing meta-learning methods either do not scale to long adaptation or else rely on handcrafted task-specific architectures. Here, we propose a meta-learning approach that obviates the need for this often sub-optimal hand-selection. In particular, we develop general techniques based on Bayesian shrinkage to automatically discover and learn both task-specific and general reusable modules. Empirically, we demonstrate that our method discovers a small set of meaningful task-specific modules and outperforms existing meta-learning approaches in domains like few-shot text-to-speech that have little task data and long adaptation horizons. We also show that existing meta-learning methods including MAML, iMAML, and Reptile emerge as special cases of our method.
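To make the shrinkage idea concrete, here is a minimal sketch of a shrinkage-regularized inner adaptation loop, assuming a Gaussian prior φ_m ~ N(θ_m, σ_m² I) on each module's task-specific parameters. All names (`adapt_task`, `modules`, `sigma`, `task_loss_fn`) are hypothetical illustrations, not the authors' implementation.

```python
import torch

def adapt_task(modules, sigma, task_loss_fn, steps=100, lr=1e-2):
    """Adapt task-specific copies phi of the meta-parameters theta.

    modules: dict mapping module name -> meta-parameter tensor (theta_m)
    sigma:   dict mapping module name -> shrinkage scale (sigma_m)
    """
    # Task-specific parameters start at the meta-learned values.
    phi = {m: theta.detach().clone().requires_grad_(True)
           for m, theta in modules.items()}
    opt = torch.optim.SGD(phi.values(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = task_loss_fn(phi)
        # Gaussian shrinkage prior phi_m ~ N(theta_m, sigma_m^2 I):
        # a small sigma_m pins module m to its shared meta-learned value,
        # while a large sigma_m lets it specialize to the task. Because the
        # prior penalizes drift from theta_m, adaptation can run for many
        # steps on little data without overfitting.
        for m, theta in modules.items():
            loss = loss + ((phi[m] - theta.detach()) ** 2).sum() / (2 * sigma[m] ** 2)
        loss.backward()
        opt.step()
    return phi
```

Under this reading, learning a per-module σ_m is what lets the method discover which modules are task-specific (σ_m stays large) and which are general and reusable (σ_m shrinks toward zero, freezing the module at its shared value); with a uniform σ across modules and a single gradient step, the update reduces to a MAML-style adaptation, consistent with the special cases mentioned in the abstract.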