Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning
Massimo Caccia · Pau Rodriguez · Oleksiy Ostapenko · Fabrice Normandin · Min Lin · Lucas Page-Caccia · Issam Hadj Laradji · Irina Rish · Alexandre Lacoste · David Vázquez · Laurent Charlin

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #771

Continual learning agents experience a stream of (related) tasks. The main challenge is that the agent must not forget previous tasks while also adapting to novel tasks in the stream. We are interested in the intersection of two recent continual-learning scenarios. In meta-continual learning, the model is pre-trained using meta-learning to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation. In their original formulations, both methods have limitations. We stand on their shoulders to propose a more general scenario, OSAKA, where an agent must quickly solve new (out-of-distribution) tasks, while also requiring fast remembering. We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario. We propose Continual-MAML, an online extension of the popular MAML algorithm, as a strong baseline for this scenario. We show in an empirical study that Continual-MAML is better suited to the new scenario than the aforementioned methodologies, including standard continual learning and meta-learning approaches.
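The core idea, MAML-style fast adaptation applied to an online stream, can be sketched in a toy setting. The code below is an illustrative sketch only, not the authors' Continual-MAML algorithm: it adapts a linear model to 1-D regression tasks with a few inner-loop gradient steps, and uses a crude loss-spike heuristic (our own simplification) to decide when the stream has shifted to a new task and adaptation should restart from the meta-learned initialization.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): MAML-style fast
# adaptation on a stream of 1-D linear regression tasks. The loss-spike
# check is a hypothetical stand-in for a proper task-shift detector.

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Mean-squared error of a linear model and its gradient w.r.t. w."""
    err = X @ w - y
    return float(np.mean(err ** 2)), 2 * X.T @ err / len(y)

def inner_adapt(w, X, y, lr=0.1, steps=5):
    """Fast adaptation: a few gradient steps from the current parameters."""
    for _ in range(steps):
        _, g = loss_and_grad(w, X, y)
        w = w - lr * g
    return w

def sample_task(slope):
    """One batch from a task defined by its regression slope."""
    X = rng.normal(size=(32, 1))
    y = slope * X[:, 0] + 0.01 * rng.normal(size=32)
    return X, y

# Stand-in for a meta-trained initialization (assumed, for illustration).
w_meta = np.array([1.0])

# Online stream: a task repeats, then shifts out of distribution.
w = w_meta.copy()
for t, slope in enumerate([1.2, 1.2, 3.0, 3.0]):
    X, y = sample_task(slope)
    pre_loss, _ = loss_and_grad(w, X, y)
    if pre_loss > 1.0:        # crude heuristic: large loss => new task,
        w = w_meta.copy()     # restart adaptation from the meta-init
    w = inner_adapt(w, X, y)
    post_loss, _ = loss_and_grad(w, X, y)
    print(f"t={t} slope={slope} pre={pre_loss:.3f} post={post_loss:.3f}")
```

On repeated tasks the pre-adaptation loss stays low (fast remembering); after the shift the spike triggers a restart and a few gradient steps recover a low loss (fast adaptation), which is the behaviour the OSAKA scenario evaluates.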

Author Information

Massimo Caccia (MILA)
Pau Rodriguez (Element AI)
Oleksiy Ostapenko (University of Montreal, MILA)
Fabrice Normandin (MILA)
Min Lin (MILA)
Lucas Page-Caccia (McGill University)
Issam Hadj Laradji (McGill + Element AI)
Irina Rish (Mila/UdeM)
Alexandre Lacoste (Element AI)
David Vázquez (Element AI)
Laurent Charlin (MILA / U.Montreal)