Spotlight
Reconciling meta-learning and continual learning with online mixtures of tasks
Ghassen Jerfel · Erin Grant · Tom Griffiths · Katherine Heller

Wed Dec 11 04:20 PM -- 04:25 PM (PST) @ West Ballroom A + B

Learning-to-learn, or meta-learning, leverages data-driven inductive bias to increase the efficiency of learning on a novel task. This approach encounters difficulty when transfer is not advantageous, for instance, when tasks are considerably dissimilar or change over time. We use the connection between gradient-based meta-learning and hierarchical Bayes to propose a Dirichlet process mixture of hierarchical Bayesian models over the parameters of an arbitrary parametric model such as a neural network. In contrast to consolidating inductive biases into a single set of hyperparameters, our approach of task-dependent hyperparameter selection better handles latent distribution shift, as demonstrated on a set of evolving, image-based, few-shot learning benchmarks.
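At a high level, the abstract describes combining gradient-based meta-learning (MAML-style inner-loop adaptation) with a Dirichlet process mixture over meta-learned initializations: each incoming task is assigned to, or spawns, the mixture component whose initialization best explains it, so dissimilar or shifting task distributions need not share one set of hyperparameters. The sketch below is a minimal illustration of that idea on synthetic linear-regression tasks, not the paper's implementation; the function names (`inner_adapt`, `crp_responsibilities`) are hypothetical, the outer update is a Reptile-style step rather than the paper's MAML-style gradient, and responsibilities are scored on the support set for brevity where a real implementation would use a held-out query set.

```python
import numpy as np

# Illustrative sketch only: synthetic 1-D linear-regression tasks with two
# latent modes, standing in for a task distribution that shifts over time.

def inner_adapt(theta, X, y, lr=0.1, steps=5):
    """MAML-style inner loop: a few gradient steps on the task's support set."""
    w = theta.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def log_likelihood(w, X, y, noise_var=0.5):
    """Gaussian log-likelihood of the task data under adapted parameters w."""
    resid = y - X @ w
    return -0.5 * np.sum(resid**2) / noise_var

def crp_responsibilities(thetas, counts, alpha, X, y):
    """Posterior over components for a task: CRP prior times adapted-model fit.
    A fresh component (here initialized at zero) is always a candidate, with
    prior mass proportional to the concentration parameter alpha."""
    cands = thetas + [np.zeros_like(thetas[0])]
    prior = np.log(np.array(counts + [alpha], dtype=float))
    loglik = np.array([log_likelihood(inner_adapt(t, X, y), X, y) for t in cands])
    logp = prior + loglik
    logp -= logp.max()                      # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum(), cands

rng = np.random.default_rng(0)
thetas, counts, alpha = [rng.normal(size=2)], [1], 1.0
for _ in range(200):
    # Sample a task from one of two latent modes (a stand-in for latent shift).
    w_true = np.array([3.0, -1.0]) if rng.random() < 0.5 else np.array([-3.0, 2.0])
    X = rng.normal(size=(10, 2))
    y = X @ w_true + 0.1 * rng.normal(size=10)
    resp, cands = crp_responsibilities(thetas, counts, alpha, X, y)
    k = int(resp.argmax())                  # hard (point-estimate) assignment
    if k == len(thetas):                    # task spawned a new component
        thetas.append(cands[k].copy())
        counts.append(0)
    counts[k] += 1
    # Reptile-style outer step: move the chosen component's initialization
    # toward the task-adapted parameters.
    thetas[k] += 0.1 * (inner_adapt(thetas[k], X, y) - thetas[k])

print(len(thetas), [t.round(1) for t in thetas])
```

Run as-is, the mixture typically recovers one initialization per latent task mode, which is the qualitative behavior the abstract attributes to task-dependent hyperparameter selection.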

Author Information

Ghassen Jerfel (Duke University)
Erin Grant (UC Berkeley)
Tom Griffiths (Princeton University)
Katherine Heller (Google)
