
Poster
Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions
Binghui Peng · Andrej Risteski

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #333
Continual learning is an emerging paradigm in machine learning, wherein a model is exposed in an online fashion to data from multiple different distributions (i.e. environments), and is expected to adapt to the distribution change. Precisely, the goal is to perform well in the new environment, while simultaneously retaining performance on the previous environments (i.e. avoiding "catastrophic forgetting"). While this setup has enjoyed a lot of attention in the applied community, there has not been theoretical work that even formalizes the desired guarantees. In this paper, we propose a framework for continual learning through the lens of feature extraction---namely, one in which features, as well as a classifier, are trained with each environment. When the features are linear, we design an efficient gradient-based algorithm $\mathsf{DPGrad}$ that is guaranteed to perform well on the current environment and to avoid catastrophic forgetting. In the general case, when the features are non-linear, we show that such an algorithm cannot exist, whether efficient or not.
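To make the setup concrete, below is a minimal, self-contained NumPy sketch of the feature-extraction formalization the abstract describes: a shared linear feature map trained jointly with a per-environment classifier, followed by a check of how training on a new environment affects earlier ones. This is purely illustrative and is not the paper's $\mathsf{DPGrad}$ algorithm; the toy data model, dimensions, and training loop are all assumptions made for the demo.

```python
# Illustrative sketch only (NOT the paper's DPGrad): continual learning with
# a shared linear feature extractor R and one linear head per environment.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 5, 200  # input dim, shared feature dim, samples per environment


def make_env(seed):
    """Toy regression environment: y = <u, x> + small noise (assumed model)."""
    r = np.random.default_rng(seed)
    u = r.normal(size=d)                  # environment-specific ground truth
    X = r.normal(size=(n, d))
    y = X @ u + 0.01 * r.normal(size=n)
    return X, y


envs = [make_env(s) for s in range(3)]
R = rng.normal(size=(d, k)) / np.sqrt(d)  # shared linear feature extractor
heads = []                                # one frozen linear head per environment

for t, (X, y) in enumerate(envs):
    w = np.zeros(k)
    for _ in range(500):                  # plain gradient descent on env t
        resid = X @ R @ w - y             # (n,) residuals of current predictor
        grad_w = R.T @ (X.T @ resid) / n  # gradient w.r.t. the head
        grad_R = np.outer(X.T @ resid, w) / n  # gradient w.r.t. shared features
        w -= 1e-2 * grad_w
        R -= 1e-2 * grad_R
    heads.append(w.copy())
    # Retention check: old heads stay frozen, but R keeps moving, so
    # performance on earlier environments may degrade (forgetting).
    for s in range(t + 1):
        Xs, ys = envs[s]
        mse = np.mean((Xs @ R @ heads[s] - ys) ** 2)
        print(f"after env {t}: mse on env {s} = {mse:.3f}")
```

Running this typically exhibits the tension the abstract is about: naive gradient descent adapts the shared features to the current environment and can raise the error of earlier, frozen heads, which is exactly the catastrophic forgetting that an algorithm like $\mathsf{DPGrad}$ is designed to provably avoid in the linear case.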

#### Author Information

##### Andrej Risteski (CMU)

Assistant Professor in the ML department at CMU. Prior to that I was a Wiener Fellow at MIT, and prior to that I finished my PhD at Princeton University.