

Poster

Self-Supervised Generalisation with Meta Auxiliary Learning

Shikun Liu · Andrew Davison · Edward Johns

East Exhibition Hall B + C #49

Keywords: [ Multitask and Transfer Learning ] [ Meta-Learning ] [ Algorithms ]


Abstract:

Learning with auxiliary tasks can improve the ability of a primary task to generalise. However, this comes at the cost of manually labelling auxiliary data. We propose a new method that automatically learns appropriate labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to any further data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the loss of the multi-task network, and so this interaction between the two networks can be seen as a form of meta-learning with a double gradient. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets, without requiring any additional data. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. Source code is available at https://github.com/lorenmt/maxl.
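To make the double-gradient interaction concrete, here is a minimal sketch in PyTorch. All names (`maxl_step`, `multi_task_net`, `label_net`, `inner_lr`, and the `weights=` functional forward pass) are illustrative assumptions, not the authors' API; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

def maxl_step(multi_task_net, label_net, meta_optimizer, x, y, inner_lr):
    # Inner step: update the multi-task network on the primary loss plus an
    # auxiliary loss against the generated labels. `label_net` is assumed to
    # end in a softmax, so `aux_labels` is a probability distribution.
    aux_labels = label_net(x)
    primary_logits, aux_logits = multi_task_net(x)
    inner_loss = (F.cross_entropy(primary_logits, y)
                  + F.kl_div(F.log_softmax(aux_logits, dim=1),
                             aux_labels, reduction='batchmean'))
    weights = list(multi_task_net.parameters())
    # create_graph=True keeps the inner update differentiable, so the outer
    # loss can backpropagate through it (the "double gradient").
    grads = torch.autograd.grad(inner_loss, weights, create_graph=True)
    fast_weights = [w - inner_lr * g for w, g in zip(weights, grads)]

    # Outer step: train the label-generation network so that the *updated*
    # multi-task network performs better on the primary task. Gradients reach
    # `label_net` only through `fast_weights`.
    primary_logits_new, _ = multi_task_net(x, weights=fast_weights)
    meta_loss = F.cross_entropy(primary_logits_new, y)
    meta_optimizer.zero_grad()  # meta_optimizer covers label_net.parameters()
    meta_loss.backward()
    meta_optimizer.step()
    # (A standard optimiser step for the multi-task network itself is
    # omitted here for brevity.)
```

The key design point is that the label-generation network receives no direct supervision: its only training signal is how much its generated labels, after one differentiable update of the multi-task network, reduce the primary-task loss.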
