Deep neural networks have shown great promise on a variety of downstream applications, but their ability to adapt and generalize to new data and tasks remains a challenging problem. The ability to perform few-shot adaptation to novel tasks, however, is important for the scalability and deployment of machine learning models. It is therefore crucial to understand what makes for good, transferable features in deep networks that best allow for such adaptation. In this paper, we shed light on this by showing that the most transferable features have high uniformity in the embedding space, and we propose a uniformity regularization scheme that encourages better transfer and feature reuse for few-shot learning. We evaluate our regularization on few-shot meta-learning benchmarks and show that it consistently offers benefits over baseline methods while also achieving state-of-the-art results on the Meta-Dataset benchmark.
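The abstract does not reproduce the regularizer itself, so the following is only a minimal sketch of one common way to encourage embedding uniformity: the Gaussian-potential uniformity loss of Wang and Isola (2020), added to the task loss as a weighted penalty. The function name `uniformity_loss`, the temperature `t`, and the weight `lam` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(features: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Encourage embeddings to spread uniformly over the unit hypersphere.

    features: (N, D) batch of embeddings. Returns the log of the mean
    Gaussian potential over all pairs; lower values mean more uniform.
    """
    z = F.normalize(features, dim=1)          # project onto the unit sphere
    sq_dists = torch.pdist(z, p=2).pow(2)     # pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()

# Hypothetical usage inside a few-shot training step (lam is a tuned weight):
# loss = task_loss + lam * uniformity_loss(embeddings)
```

Under this sketch, the regularizer is minimized when normalized features are spread apart, which matches the abstract's claim that high uniformity in the embedding space supports transfer and feature reuse.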
Author Information
Samarth Sinha (University of Toronto, Vector Institute)
More from the Same Authors
- 2021 Poster: Consistency Regularization for Variational Auto-Encoders
  Samarth Sinha · Adji Bousso Dieng
- 2021 Poster: Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
  Timo Milbich · Karsten Roth · Samarth Sinha · Ludwig Schmidt · Marzyeh Ghassemi · Bjorn Ommer
- 2020 Poster: Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples
  Samarth Sinha · Zhengli Zhao · Anirudh Goyal · Colin A Raffel · Augustus Odena
- 2020 Poster: Curriculum By Smoothing
  Samarth Sinha · Animesh Garg · Hugo Larochelle
- 2020 Spotlight: Curriculum By Smoothing
  Samarth Sinha · Animesh Garg · Hugo Larochelle