

Poster

Learning to Learn By Self-Critique

Antreas Antoniou · Amos Storkey

East Exhibition Hall B + C #28

Keywords: [ Meta-Learning ] [ Few-Shot Learning ] [ Algorithms ]


Abstract:

In few-shot learning, a machine learning system must learn from a small set of labelled examples of a specific task, such that it generalizes well to new, unlabelled examples of the same task. Given the limited availability of labelled examples in such tasks, we need to make use of all the information available. For this reason, we propose transductive meta-learning for few-shot settings, which achieves state-of-the-art few-shot performance.

Usually a model learns task-specific information from a small training set (the \emph{support-set}) and subsequently produces predictions on a small unlabelled validation set (the \emph{target-set}). The target-set contains additional task-specific information which existing few-shot learning methods do not utilize. Exploiting it requires approaches beyond current methods: at inference time the target-set contains only input data-points, so discriminative supervised learning cannot be applied.
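To make the episode structure concrete, here is a minimal sketch of the data a transductive few-shot learner sees in one task; the class name, field names, shapes, and PyTorch types are illustrative assumptions, not the paper's code.

```python
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class Episode:
    """One few-shot task, e.g. a 5-way 1-shot image-classification problem."""
    support_x: torch.Tensor  # (n_way * k_shot, C, H, W) labelled inputs
    support_y: torch.Tensor  # (n_way * k_shot,) integer class labels
    target_x: torch.Tensor   # (n_target, C, H, W) unlabelled inputs
    # Target labels exist only during meta-training, where they drive the
    # outer-loop objective; at inference time this field is None.
    target_y: Optional[torch.Tensor] = None
```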

In this paper, we propose a framework called \emph{Self-Critique and Adapt}, or SCA. This approach learns to learn a label-free loss function, parameterized as a neural network, which leverages target-set information. A base-model first learns on the support-set using existing methods (e.g. stochastic gradient descent with the cross-entropy loss), and is then further adapted to the incoming target-set using the learned label-free loss. This unsupervised loss function is itself meta-learned such that the adapted model achieves higher generalization performance. Experiments demonstrate that SCA offers substantially higher, state-of-the-art generalization performance compared to baselines that adapt only on the support-set.
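The following is a minimal sketch of one meta-training step in this style, assuming a PyTorch base-model and a small critic network that scores the model's target-set predictions. In the paper the critic may condition on richer task information; the function names, the single label-free update, and all hyperparameters here are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call


def sca_step(base_model, critic, episode, inner_lr=0.01, inner_steps=1):
    # Clone parameters so inner-loop updates stay differentiable and do
    # not overwrite the meta-parameters of the base-model.
    params = {k: v.clone() for k, v in base_model.named_parameters()}

    def forward(x, p):
        # Run the base-model with an explicit parameter dictionary.
        return functional_call(base_model, p, (x,))

    # 1) Supervised adaptation on the labelled support-set.
    for _ in range(inner_steps):
        loss = F.cross_entropy(forward(episode.support_x, params),
                               episode.support_y)
        grads = torch.autograd.grad(loss, list(params.values()),
                                    create_graph=True)
        params = {k: p - inner_lr * g
                  for (k, p), g in zip(params.items(), grads)}

    # 2) Label-free adaptation: the critic network scores the model's
    #    target-set predictions, and the model descends on that score.
    preds = torch.softmax(forward(episode.target_x, params), dim=-1)
    critic_loss = critic(preds).mean()
    grads = torch.autograd.grad(critic_loss, list(params.values()),
                                create_graph=True)
    params = {k: p - inner_lr * g
              for (k, p), g in zip(params.items(), grads)}

    # 3) Outer objective: generalization of the twice-adapted model,
    #    using target labels available only at meta-training time.
    return F.cross_entropy(forward(episode.target_x, params),
                           episode.target_y)
```

Calling `sca_step(...).backward()` during meta-training would propagate through both adaptation phases (second-order gradients, enabled by `create_graph=True`), jointly updating the base-model and the critic; at inference time only steps 1 and 2 are run, since no target labels exist.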
