

Tutorial

Lifelong Learning Machines

Tyler Hayes · Dhireesha Kudithipudi · Gido van de Ven

Moderator: Jessica Schrouff

Virtual

Abstract:

Incrementally learning new information from a non-stationary stream of data, referred to as lifelong learning, is a key feature of natural intelligence, but an open challenge for deep learning. For example, when artificial neural networks are trained on samples from a new task or data distribution, they tend to rapidly lose previously acquired capabilities, a phenomenon referred to as catastrophic forgetting. In stark contrast, humans and other animals are able to incrementally learn new skills without compromising those that were learned before. Numerous deep learning methods for lifelong learning have been proposed in recent years, yet a substantial gap remains between the lifelong learning abilities of artificial and biological neural networks.
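
To make the phenomenon concrete, the following is a minimal sketch (not part of the tutorial materials) that reproduces catastrophic forgetting in PyTorch on two synthetic toy tasks. The task construction, network size, and training settings are all illustrative assumptions: a small network is trained on task A, then sequentially on a conflicting task B without access to task A data, after which its accuracy on task A typically collapses.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset, flip=False):
    # Two Gaussian clusters separated along the second input dimension.
    # `offset` shifts the clusters along the first dimension so each task
    # occupies its own input region; `flip` inverts the label rule so the
    # two tasks conflict. This is an illustrative construction, not a
    # benchmark from the tutorial.
    x0 = torch.randn(500, 2) + torch.tensor([offset, 0.0])
    x1 = torch.randn(500, 2) + torch.tensor([offset, 4.0])
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(500, dtype=torch.long),
                   torch.ones(500, dtype=torch.long)])
    return x, (1 - y) if flip else y

def train(model, x, y, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(offset=-4.0)             # task A
xb, yb = make_task(offset=+4.0, flip=True)  # task B, conflicting label rule

train(model, xa, ya)
print(f"after task A: acc on A = {accuracy(model, xa, ya):.2f}")

# Training on task B alone, with no replay of task A data, typically
# overwrites what was learned for task A (catastrophic forgetting).
train(model, xb, yb)
print(f"after task B: acc on A = {accuracy(model, xa, ya):.2f}, "
      f"acc on B = {accuracy(model, xb, yb):.2f}")
```

The lifelong learning strategies surveyed in the tutorial (for example, replay- or regularization-based methods) aim to prevent exactly this kind of accuracy drop on earlier tasks.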

In this tutorial, we start by asking what key capabilities a successful lifelong learning machine should have. We then review the current literature on lifelong learning and ask how far we have come. We do this in two parts. First, we review the benchmarks and experimental setups popular in the literature, and we critically assess to what extent they measure progress relevant to real-world lifelong learning applications. Second, we review the strategies for lifelong learning that have been explored so far, and we ask to what extent these strategies could support lifelong learning. In particular, we ask which biological mechanisms remain unexplored. We end with a panel discussion in which we probe experts in the field for their opinions on the biggest open questions for future lifelong learning research.
