Poster in Workshop: NeurIPS 2022 Workshop on Meta-Learning

Contextual Squeeze-and-Excitation

Massimiliano Patacchiola · John Bronskill · Aliaksandra Shysheya · Katja Hofmann · Sebastian Nowozin · Richard Turner


Abstract:

Several applications require effective knowledge transfer across tasks in the low-data regime. For instance, in personalization, a pretrained system is adapted by learning from small amounts of labeled data belonging to a specific user (the context). This setting requires high accuracy under low computational complexity, meaning a low memory footprint in terms of parameter storage and adaptation cost. Meta-learning methods based on Feature-wise Linear Modulation (FiLM) generators satisfy these constraints, as they can adapt a backbone without expensive fine-tuning. However, there has been limited research on viable alternatives to FiLM generators. In this paper we focus on this area and propose a new adaptive block called Contextual Squeeze-and-Excitation (CaSE). CaSE is more efficient than FiLM generators for several reasons: it does not require a separate set encoder, has fewer learnable parameters, and uses only a scale vector (no shift) to modulate activations. We empirically show that CaSE outperforms FiLM generators in both parameter efficiency (a 75% reduction in the number of adaptation parameters) and classification accuracy (a 1.5% average improvement on the 26 datasets of the VTAB+MD benchmark).
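
To make the mechanism concrete, the following is a minimal PyTorch sketch of what such a block could look like, based only on the description in the abstract: activations of the context set are squeezed into a per-channel summary (so no separate set encoder is needed), and a small gating network produces a sigmoid scale vector that multiplicatively modulates activations, with no shift term. The class name CaSE follows the paper, but the pooling choice, the bottleneck reduction ratio, and the gating-network structure are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CaSE(nn.Module):
    """Hypothetical sketch of a Contextual Squeeze-and-Excitation block.

    Inferred from the abstract: squeeze context activations into a
    per-channel summary, pass it through a small bottleneck MLP, and
    emit a sigmoid-gated scale vector (no shift) that multiplicatively
    modulates activations. The reduction ratio and MLP shape here are
    assumptions, not taken from the paper.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.scale = None  # cached per-task scale vector

    def adapt(self, context_features: torch.Tensor) -> None:
        """Compute a task-specific scale from context activations.

        context_features: (N, C, H, W) activations for the context set.
        Squeeze: average over the spatial dims and the whole context
        set, so no separate set encoder is required.
        """
        squeezed = context_features.mean(dim=(0, 2, 3))  # (C,)
        self.scale = self.gate(squeezed)                 # (C,)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Modulate target activations x of shape (B, C, H, W)."""
        if self.scale is None:
            return x  # unadapted: identity behaviour
        return x * self.scale.view(1, -1, 1, 1)

# Usage: adapt once per task on the context set, then reuse the cached
# scale for every target example (shapes here are illustrative).
block = CaSE(channels=64)
block.adapt(torch.randn(5, 64, 8, 8))    # 5 context images
out = block(torch.randn(32, 64, 8, 8))   # modulated target activations

Under this reading, adaptation costs a single forward pass through the gating network per task, after which modulating each target example reduces to one per-channel multiply; this is consistent with the abstract's claim of adapting a backbone without expensive fine-tuning.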
