
Learning Invariances using the Marginal Likelihood
Mark van der Wilk · Matthias Bauer · ST John · James Hensman

Wed Dec 05 02:00 PM -- 04:00 PM (PST) @ Room 210 #20

In many supervised learning tasks, learning what changes do not affect the prediction target is as crucial to generalisation as learning what does. Data augmentation is a common way to make a model exhibit an invariance: training data is modified according to an invariance designed by a human and added to the training data. We argue that invariances should be incorporated into the model structure, and learned using the marginal likelihood, which can correctly reward the reduced complexity of invariant models. We incorporate invariances in a Gaussian process, due to good marginal likelihood approximations being available for these models. Our main contribution is a derivation of a variational inference scheme for invariant Gaussian processes where the invariance is described by a probability distribution that can be sampled from, much like how data augmentation is implemented in practice.
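The idea of building an invariance into the model via a sampleable distribution over transformations can be sketched as a Monte Carlo estimate of an invariance-averaged kernel, k_inv(x, x') = E[k(g(x), g'(x'))] with transformations g, g' drawn independently. The sketch below assumes rotations of 2D inputs and an RBF base kernel; the function names and the choice of transformation are illustrative, not the paper's code.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Standard RBF base kernel between two points."""
    d = a - b
    return np.exp(-0.5 * np.dot(d, d) / lengthscale**2)

def rotate(x, theta):
    """Rotate a 2D point by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

def invariant_kernel(x1, x2, n_samples=2000, rng=None):
    """Monte Carlo estimate of the invariance-averaged kernel
    k_inv(x1, x2) = E_{g, g'}[k(g(x1), g'(x2))],
    with g, g' uniform rotations sampled independently --
    mirroring how data augmentation samples transformations."""
    rng = np.random.default_rng(rng)
    thetas1 = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    thetas2 = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    vals = [rbf(rotate(x1, t1), rotate(x2, t2))
            for t1, t2 in zip(thetas1, thetas2)]
    return float(np.mean(vals))

# The averaged kernel is (approximately) unchanged when either
# input is replaced by a rotated copy of itself.
x = np.array([1.0, 0.0])
y = np.array([0.5, 0.5])
print(invariant_kernel(x, y, rng=0))
print(invariant_kernel(rotate(x, np.pi / 3), y, rng=0))
```

A Gaussian process using such a kernel is invariant by construction, and its (approximate) marginal likelihood can then score how well the assumed invariance suits the data, rather than the invariance being imposed through augmented training examples.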

Author Information

Mark van der Wilk (PROWLER.io)
Matthias Bauer (Max Planck Institute for Intelligent Systems)
ST John (PROWLER.io)
James Hensman (PROWLER.io)
