
Meta-Learning Representations for Continual Learning
Khurram Javed · Martha White

Tue Dec 10 05:30 PM -- 07:30 PM (PST) @ East Exhibition Hall B + C #46

A continual learning agent should be able to build on top of existing knowledge to learn on new data quickly while minimizing forgetting. Current intelligent systems based on neural network function approximators arguably do the opposite—they are highly prone to forgetting and rarely trained to facilitate future learning. One reason for this poor behavior is that they learn from a representation that is not explicitly trained for these two goals. In this paper, we propose OML, an objective that directly minimizes catastrophic interference by learning representations that accelerate future learning and are robust to forgetting under online updates in continual learning. We show that it is possible to learn naturally sparse representations that are more effective for online updating. Moreover, our algorithm is complementary to existing continual learning strategies, such as MER and GEM. Finally, we demonstrate that a basic online updating strategy on representations learned by OML is competitive with rehearsal-based methods for continual learning.
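The abstract describes a two-level training structure: an inner loop that performs online updates on a sequence of samples, and an outer loop that meta-trains the representation so those online updates are fast and non-interfering. A minimal, first-order sketch of that structure is below; the linear "representation" `R`, the toy regression tasks, and all step sizes are illustrative assumptions, not the paper's actual architecture or objective.

```python
import numpy as np

# Hypothetical sketch of an OML-style meta-objective: a meta-learned
# linear "representation" R feeds a prediction head w. The inner loop
# adapts only the head with *online* (one-sample-at-a-time) SGD,
# mimicking the continual-learning regime; the outer loop then nudges
# R so that the post-adaptation loss on the whole trajectory is low.
# All names, sizes, and step sizes here are illustrative.

rng = np.random.default_rng(0)
d_in, d_feat = 4, 8
R = rng.normal(scale=0.5, size=(d_feat, d_in))  # meta-parameters
alpha, beta = 0.01, 0.01                        # inner / outer step sizes

def loss(R, w, X, y):
    """Mean squared error of the head w on features produced by R."""
    return 0.5 * float(np.mean((X @ R.T @ w - y) ** 2))

def adapt(R, X, y, alpha=alpha):
    """Inner loop: online SGD on the head only, one sample at a time."""
    w = np.zeros(R.shape[0])
    for x_t, y_t in zip(X, y):
        feat = R @ x_t
        w -= alpha * (feat @ w - y_t) * feat
    return w

for step in range(200):
    # Sample a toy regression "task": y = <u, x> for a random u.
    u = rng.normal(size=d_in)
    X = rng.normal(size=(10, d_in))
    y = X @ u

    w = adapt(R, X, y)

    # Outer (meta) step, first-order approximation: gradient of the
    # post-adaptation loss with respect to R, treating the adapted
    # head w as a constant rather than differentiating through adapt.
    preds = X @ R.T @ w
    grad_R = np.outer(w, (preds - y) @ X) / len(X)
    R -= beta * grad_R
```

In the paper the representation is a deep network and the outer loop differentiates through the inner updates (in the style of MAML); this linear, first-order sketch only illustrates the two-level structure of meta-training for fast, interference-resistant online learning.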

Author Information

Khurram Javed (University of Alberta)

Areas of Interest: I believe in doing reproducible science and in working at the intersection of Meta-Learning, Representation Learning, and Continual Learning.

Martha White (University of Alberta)
