In few-shot learning, a machine learning system must learn from a small set of labelled examples of a specific task, such that it generalizes well to new, unlabelled examples of the same task. Given the limited availability of labelled examples in such tasks, we need to make use of all the information available. For this reason we propose transductive meta-learning for few-shot settings, and use it to obtain state-of-the-art few-shot performance.
Usually a model learns task-specific information from a small training set (the \emph{support-set}) and subsequently produces predictions on a small unlabelled validation set (the \emph{target-set}). The target-set contains additional task-specific information which existing few-shot learning methods do not exploit. Using it is challenging and requires approaches beyond the current methods: at inference time, the target-set contains only input data-points, so discriminative learning cannot be used.
In this paper, we propose a framework called \emph{Self-Critique and Adapt}, or SCA. SCA learns to learn a label-free loss function, parameterized as a neural network, which leverages target-set information. A base-model first learns on the support-set using existing methods (e.g. stochastic gradient descent combined with the cross-entropy loss), and is then updated for the incoming target task using the meta-learned label-free loss. This unsupervised loss function is optimized such that the adapted model achieves higher generalization performance. Experiments demonstrate that SCA offers substantially higher, state-of-the-art generalization performance compared to baselines which adapt only on the support-set.
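The two-stage adaptation described above can be sketched as follows. This is a minimal illustration, not the paper's method: a linear softmax classifier stands in for the base-model, and a fixed entropy-minimisation loss stands in for the meta-learned label-free critic loss (which SCA parameterizes as a neural network and optimizes in an outer loop). All names, dimensions, and step sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# A toy 2-way, 5-shot episode: labelled support-set (xs, ys), unlabelled target-set (xt).
n_way, k_shot, dim = 2, 5, 8
means = 2.0 * rng.normal(size=(n_way, dim))
xs = np.vstack([means[c] + rng.normal(size=(k_shot, dim)) for c in range(n_way)])
ys = np.repeat(np.eye(n_way), k_shot, axis=0)  # one-hot support labels
xt = np.vstack([means[c] + rng.normal(size=(5, dim)) for c in range(n_way)])  # inputs only

W = np.zeros((dim, n_way))  # linear base-model

# Stage 1: supervised adaptation on the support-set (gradient descent + cross-entropy).
for _ in range(25):
    p = softmax(xs @ W)
    W -= 0.5 * xs.T @ (p - ys) / len(xs)

# Stage 2: label-free adaptation on the target-set. SCA would apply a learned
# critic loss here; entropy minimisation is used purely as a stand-in to show
# the second, unsupervised update using target inputs alone.
for _ in range(5):
    p = softmax(xt @ W)
    logp = np.log(p + 1e-12)
    H = -(p * logp).sum(axis=1, keepdims=True)  # per-example predictive entropy
    grad_logits = -p * (logp + H)               # gradient of mean entropy w.r.t. logits
    W -= 0.1 * xt.T @ grad_logits / len(xt)
```

The point of the sketch is the structure of the inner loop: the second stage touches only target inputs, so any loss applied there must be label-free, which is exactly the constraint the meta-learned critic is designed to satisfy.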
Author Information
Antreas Antoniou (University of Edinburgh)
Amos Storkey (University of Edinburgh)
More from the Same Authors
- 2021 : Hamiltonian prior to Disentangle Content and Motion in Image Sequences » Asif Khan · Amos Storkey
- 2022 : Parity in predictive performance is neither necessary nor sufficient for fairness » Justin Engelmann · Miguel Bernabeu · Amos Storkey
- 2022 : Deep Class-Conditional Gaussians for Continual Learning » Thomas Lee · Amos Storkey
- 2022 Poster: Hamiltonian Latent Operators for content and motion disentanglement in image sequences » Asif Khan · Amos Storkey
- 2021 Poster: Gradient-based Hyperparameter Optimization Over Long Horizons » Paul Micaelli · Amos Storkey
- 2020 Poster: Self-Supervised Relational Reasoning for Representation Learning » Massimiliano Patacchiola · Amos Storkey
- 2020 Spotlight: Self-Supervised Relational Reasoning for Representation Learning » Massimiliano Patacchiola · Amos Storkey
- 2020 Poster: Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels » Massimiliano Patacchiola · Jack Turner · Elliot Crowley · Michael O'Boyle · Amos Storkey
- 2020 Spotlight: Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels » Massimiliano Patacchiola · Jack Turner · Elliot Crowley · Michael O'Boyle · Amos Storkey
- 2019 Poster: Zero-shot Knowledge Transfer via Adversarial Belief Matching » Paul Micaelli · Amos Storkey
- 2019 Spotlight: Zero-shot Knowledge Transfer via Adversarial Belief Matching » Paul Micaelli · Amos Storkey
- 2018 Poster: Moonshine: Distilling with Cheap Convolutions » Elliot Crowley · Gavia Gray · Amos Storkey
- 2015 Poster: Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling » Xiaocheng Shang · Zhanxing Zhu · Benedict Leimkuhler · Amos Storkey
- 2014 Workshop: NIPS Workshop on Transactional Machine Learning and E-Commerce » David Parkes · David H Wolpert · Jennifer Wortman Vaughan · Jacob D Abernethy · Amos Storkey · Mark Reid · Ping Jin · Nihar Bhadresh Shah · Mehryar Mohri · Luis E Ortiz · Robin Hanson · Aaron Roth · Satyen Kale · Sebastien Lahaie
- 2012 Poster: Continuous Relaxations for Discrete Hamiltonian Monte Carlo » Zoubin Ghahramani · Yichuan Zhang · Charles Sutton · Amos Storkey
- 2012 Spotlight: Continuous Relaxations for Discrete Hamiltonian Monte Carlo » Zoubin Ghahramani · Yichuan Zhang · Charles Sutton · Amos Storkey
- 2012 Poster: The Coloured Noise Expansion and Parameter Estimation of Diffusion Processes » Simon Lyons · Amos Storkey · Simo Sarkka
- 2011 Poster: Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability » David Reichert · Peggy Series · Amos Storkey
- 2011 Spotlight: Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability » David Reichert · Peggy Series · Amos Storkey
- 2010 Poster: Hallucinations in Charles Bonnet Syndrome Induced by Homeostasis: a Deep Boltzmann Machine Model » David Reichert · Peggy Series · Amos Storkey
- 2010 Poster: Sparse Instrumental Variables (SPIV) for Genome-Wide Studies » Felix V Agakov · Paul McKeigue · Jon Krohn · Amos Storkey
- 2007 Poster: Continuous Time Particle Filtering for fMRI » Lawrence Murray · Amos Storkey
- 2007 Poster: Modelling motion primitives and their timing in biologically executed movements » Ben H Williams · Marc Toussaint · Amos Storkey
- 2006 Poster: Learning Structural Equation Models for fMRI » Amos Storkey · Enrico Simonotto · Heather Whalley · Stephen Lawrie · Lawrence Murray · David McGonigle
- 2006 Poster: Mixture Regression for Covariate Shift » Amos Storkey · Masashi Sugiyama