Poster
Look-ahead Meta Learning for Continual Learning
Gunshi Gupta · Karmesh Yadav · Liam Paull

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #767

The continual learning problem involves training models with limited capacity to perform well on an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online continual learning, aided by a small episodic memory. Incorporating the modulation of per-parameter learning rates into our meta-learning update also allows us to draw connections to, and exploit, prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning-based approaches for continual learning on real-world visual classification benchmarks.
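The update the abstract describes (an inner loop of SGD steps on incoming data using learnable per-parameter learning rates, followed by a meta-update of both the weights and the learning rates against a replay-augmented meta-loss) can be illustrated in a short PyTorch sketch. What follows is a minimal first-order reconstruction based only on that description, not the authors' implementation; la_maml_step, alphas, alpha_lr, the batch shapes and the loss choice are all hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def la_maml_step(model, alphas, new_batch, replay_batch,
                     inner_steps=5, alpha_lr=0.1):
        # One La-MAML-style meta-update (first-order sketch).
        x_new, y_new = new_batch
        x_mem, y_mem = replay_batch
        params = list(model.parameters())

        # Remember the pre-update weights theta_0 and accumulate the
        # inner-loop gradients used by the first-order hypergradient.
        theta0 = [p.detach().clone() for p in params]
        grad_sum = [torch.zeros_like(p) for p in params]

        # Inner loop: a few steps on the new-task batch, scaled by the
        # learnable per-parameter learning rates (clipped at zero).
        for _ in range(inner_steps):
            loss = F.cross_entropy(model(x_new), y_new)
            grads = torch.autograd.grad(loss, params)
            with torch.no_grad():
                for p, a, g, s in zip(params, alphas, grads, grad_sum):
                    p.sub_(a.clamp(min=0.0) * g)
                    s.add_(g)

        # Meta-loss at theta_k on new plus replayed samples, so lowering
        # it favours updates that do not hurt earlier tasks.
        x, y = torch.cat([x_new, x_mem]), torch.cat([y_new, y_mem])
        meta_grads = torch.autograd.grad(F.cross_entropy(model(x), y), params)

        with torch.no_grad():
            for p, p0, a, g_meta, s in zip(params, theta0, alphas,
                                           meta_grads, grad_sum):
                # First-order hypergradient: d(meta_loss)/d(alpha) is
                # approximately -g_meta * (sum of inner gradients), so
                # gradient descent on alpha adds alpha_lr * g_meta * s.
                a.add_(alpha_lr * g_meta * s)
                # Update theta_0 with the meta-gradient, gated by the
                # clipped (non-negative) learned learning rates.
                p.copy_(p0 - a.clamp(min=0.0) * g_meta)

    # Toy usage with hypothetical shapes:
    model = nn.Linear(784, 10)
    alphas = [torch.full_like(p, 0.01) for p in model.parameters()]
    new_batch = (torch.randn(8, 784), torch.randint(0, 10, (8,)))
    replay_batch = (torch.randn(8, 784), torch.randint(0, 10, (8,)))
    la_maml_step(model, alphas, new_batch, replay_batch)

Clipping the learning rates at zero is the connection to the abstract's claim about mitigating forgetting: a parameter whose learned rate is driven negative receives no update, so the meta-learner can effectively gate updates that would conflict with old tasks.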

Author Information

Gunshi Gupta (Université de Montréal)
Karmesh Yadav (Carnegie Mellon University)
Liam Paull (Université de Montréal)
