Workshop
Fri Dec 7th 08:00 AM -- 06:30 PM @ Room 517 A
Continual Learning
Razvan Pascanu · Yee Teh · Marc Pickett · Mark Ring

Continual learning (CL) is the ability of a model to learn continually from a stream of data, building on what was learnt previously and thereby exhibiting positive transfer, while also remembering previously seen tasks. CL is a fundamental step towards artificial intelligence, as it allows an agent to adapt to a continuously changing environment, a hallmark of natural intelligence. It also has implications for supervised and unsupervised learning: for example, when a dataset is not properly shuffled, or when the input distribution drifts, the model overfits the recently seen data and forgets the rest -- a phenomenon referred to as catastrophic forgetting, which CL systems aim to address.

Continual learning is defined in practice through a series of desiderata. A non-exhaustive list includes:
* Online learning -- learning occurs at every moment, with no fixed tasks or data sets and no clear boundaries between tasks;
* Presence of transfer (forward/backward) -- the model should be able to transfer knowledge from previously seen data or tasks to new ones, and learning new tasks should, where possible, improve performance on older ones (illustrated in the sketch after this list);
* Resistance to catastrophic forgetting -- new learning does not destroy performance on previously seen data;
* Bounded system size -- the model capacity should be fixed, forcing the system to use its capacity intelligently and to forget information gracefully so as to maximise future reward;
* No direct access to previous experience -- while the model can remember a limited amount of experience, a continual learning algorithm should not have direct access to past tasks or be able to rewind the environment.
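
To make the transfer and forgetting desiderata concrete, here is a minimal sketch of how they are often quantified, in the spirit of the accuracy/transfer metrics popularised by recent CL papers such as GEM (Lopez-Paz & Ranzato, 2017). The function names, the exact metric variants, and the random baseline are illustrative assumptions, not a canonical definition:

```python
import numpy as np

# acc[i, j] = accuracy on task j after training sequentially on tasks 0..i.
# The metric variants below are illustrative; the literature uses several
# closely related formulations.

def average_accuracy(acc):
    """Mean accuracy over all tasks after training on the final task."""
    return float(acc[-1].mean())

def forgetting(acc):
    """Average drop on each old task from its best past accuracy to its
    accuracy at the end of training (positive = forgetting)."""
    T = acc.shape[0]
    drops = [acc[:T - 1, j].max() - acc[-1, j] for j in range(T - 1)]
    return float(np.mean(drops))

def forward_transfer(acc, baseline):
    """Average gain on an unseen task j, measured just before training on it,
    relative to an independent baseline (e.g. a randomly initialised model)."""
    T = acc.shape[0]
    gains = [acc[j - 1, j] - baseline[j] for j in range(1, T)]
    return float(np.mean(gains))

# Toy example: 3 tasks, accuracy recorded after each training stage.
acc = np.array([[0.90, 0.15, 0.10],
                [0.70, 0.92, 0.12],
                [0.55, 0.80, 0.91]])
print(average_accuracy(acc))   # ~0.753
print(forgetting(acc))         # ~0.235
print(forward_transfer(acc, baseline=np.array([0.10, 0.10, 0.10])))  # ~0.035
```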

In the previous edition of the workshop, the focus was on defining a complete list of desiderata for what a continual learning (CL) enabled system should be able to do. In this edition, we want to further constrain the discussion, focusing on how to evaluate CL, how it relates to other existing topics (e.g. lifelong learning, transfer learning, meta-learning), and how ideas from these topics could be useful for continual learning.

Different aspects of continual learning are in opposition to each other (e.g. fixed model capacity and not forgetting), which also raises the question of how to evaluate continual learning systems. On one hand, what are the right trade-offs between these opposing forces? How do we compare existing algorithms along the different dimensions on which they should be evaluated (e.g. forgetting, positive transfer)? What are the right metrics to report? On the other hand, optimal or meaningful trade-offs are tightly tied to the data, or at least the type of tasks, used to test the algorithms. One task prevalent in many recent papers is PermutedMNIST (sketched below). But just as MNIST is not a reliable benchmark for classification, PermutedMNIST may be quite misleading for continual learning. What would be the right benchmarks, datasets or tasks for fruitfully exploring this topic?
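
For reference, PermutedMNIST builds each task by applying one fixed random pixel permutation to every MNIST image, leaving the labels unchanged. A minimal sketch, assuming flattened images of shape (n_examples, 784); the helper name and seed handling are illustrative:

```python
import numpy as np

def make_permuted_tasks(images, labels, num_tasks, seed=0):
    """Build PermutedMNIST-style tasks: each task applies one fixed random
    pixel permutation to every (flattened) image; labels are unchanged."""
    rng = np.random.RandomState(seed)
    tasks = []
    for _ in range(num_tasks):
        perm = rng.permutation(images.shape[1])  # one fixed permutation per task
        tasks.append((images[:, perm], labels))
    return tasks

# Toy usage with random data standing in for flattened MNIST images:
images = np.random.rand(100, 784)
labels = np.random.randint(0, 10, size=100)
tasks = make_permuted_tasks(images, labels, num_tasks=5)
```

Because a fixed permutation destroys spatial structure while preserving pixel-level statistics, the resulting tasks are superficially distinct but statistically homogeneous, which is one reason results on PermutedMNIST may not predict behaviour on more realistic non-stationary streams.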

Finally, we also encourage the presentation of both novel approaches to CL and implemented systems, which will help concretize the discussion of what CL is and how CL systems should be evaluated.

08:30 AM Introduction to the workshop (Talk)
Razvan Pascanu, Yee Teh, Mark Ring, Marc Pickett
09:15 AM Spotlight #1 (Spotlight)
09:30 AM Spotlight #2 (Spotlight)
09:45 AM Spotlight #3 (Spotlight)
10:00 AM Invited Speaker #1 Chelsea Finn (Talk)
Chelsea Finn
11:00 AM Invited Speaker #2 Raia Hadsell (Talk)
Raia Hadsell
11:30 AM Invited Speaker #3 Marc'Aurelio Ranzato (Talk)
Marc'Aurelio Ranzato
12:00 PM Lunch & Posters (Break & Posters)
Haytham Fayek, German Parisi, Brian Xu, Pramod Kaushik Mudrakarta, Sophie Cerf, Sarah Wassermann, Davit Soselia, Rahaf Aljundi, Mohamed Elhoseiny, Frantzeska Lavda, Kevin Liang, Arslan Chaudhry, Sanmit Narvekar, Vincenzo Lomonaco, Wes Chung, Michael Chang, Ying Zhao, Zsolt Kira, Pouya Bashivan, Banafsheh Rafiee, Oleksiy Ostapenko, Andrew Jones, Christos Kaplanis, Sinan Kalkan, Dan Teng, Owen He, Vincent Liu, Somjit Nath, Sung-Soo Ahn, Ting Chen, Shenyang Huang, Yash Chandak, Nathan Sprague, Martin Schrimpf, Tony Kendall, Jonathan Schwarz, Michael Li, Yunshu Du, Yen-Chang Hsu, Samira Abnar, Bo Wang
02:00 PM Invited Speaker #4 Jürgen Schmidhuber (Talk)
Jürgen Schmidhuber
02:30 PM Invited Speaker #5 Yarin Gal (Talk)
Yarin Gal
03:00 PM Coffee Break & Posters (Break & Posters)
03:30 PM Spotlight #4 (Spotlight)
03:45 PM Spotlight #5 (Spotlight)
04:00 PM Spotlight #6 (Spotlight)
04:15 PM Overview of DARPA's Lifelong Learning program (Hava Siegelmann) (Spotlight)
Hava Siegelmann
04:30 PM Invited Speaker #6 Martha White (Talk)
Martha White
05:00 PM Panel Discussion