Improved Schemes for Episodic Memory-based Lifelong Learning
Yunhui Guo, Mingrui Liu, Tianbao Yang, Tajana Rosing
Spotlight presentation: Orals & Spotlights Track 23: Graph/Meta Learning/Software
on 2020-12-09T19:00:00-08:00 - 2020-12-09T19:10:00-08:00
Abstract: Current deep neural networks can achieve remarkable performance on a single task. However, when a deep neural network is continually trained on a sequence of tasks, it tends to gradually forget the previously learned knowledge. This phenomenon is referred to as catastrophic forgetting and motivates the field of lifelong learning. Recently, episodic memory-based approaches such as GEM and A-GEM have shown remarkable performance. In this paper, we provide the first unified view of episodic memory-based approaches from an optimization perspective. This view leads to two improved schemes for episodic memory-based lifelong learning, called MEGA-I and MEGA-II. MEGA-I and MEGA-II modulate the balance between old tasks and the new task by integrating the current gradient with the gradient computed on the episodic memory. Notably, we show that GEM and A-GEM are degenerate cases of MEGA-I and MEGA-II which consistently put the same emphasis on the current task, regardless of how the loss changes over time. Our proposed schemes address this issue by using novel loss-balancing updating rules, which drastically improve the performance over GEM and A-GEM. Extensive experimental results show that the proposed schemes significantly advance the state-of-the-art on four commonly used lifelong learning benchmarks, reducing the error by up to 18%.
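To make the gradient-mixing idea concrete, the following is a minimal PyTorch sketch of one training step that combines the current-task gradient with a gradient computed on the episodic memory, weighted by their relative losses. The toy model, dummy data, and the specific `alpha_mem` weighting are illustrative assumptions, not the paper's exact MEGA-I / MEGA-II update rules.

```python
# Minimal sketch (assumptions noted below): mix the current-task gradient with
# the episodic-memory gradient using a simple loss-ratio weight. The exact
# MEGA-I / MEGA-II loss-balancing rules are given in the paper; this only
# illustrates the general structure of such an update.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(10, 5)                      # toy model (assumption)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Current-task mini-batch and episodic-memory mini-batch (dummy data).
x_cur, y_cur = torch.randn(8, 10), torch.randint(0, 5, (8,))
x_mem, y_mem = torch.randn(8, 10), torch.randint(0, 5, (8,))

def flat_grad(loss):
    """Flattened gradient of `loss` w.r.t. all model parameters."""
    grads = torch.autograd.grad(loss, model.parameters())
    return torch.cat([g.reshape(-1) for g in grads])

loss_cur = criterion(model(x_cur), y_cur)     # loss on the current task
loss_mem = criterion(model(x_mem), y_mem)     # loss on the episodic memory

g_cur = flat_grad(loss_cur)
g_mem = flat_grad(loss_mem)

# Loss-balancing weights (illustrative assumption): put more weight on the
# memory gradient when the memory loss is large relative to the current loss.
eps = 1e-8
alpha_cur = 1.0
alpha_mem = (loss_mem / (loss_cur + eps)).item()

g_mix = alpha_cur * g_cur + alpha_mem * g_mem

# Write the mixed gradient back into the parameters and take an SGD step.
offset = 0
for p in model.parameters():
    n = p.numel()
    p.grad = g_mix[offset:offset + n].view_as(p).clone()
    offset += n
optimizer.step()
```

In this sketch the emphasis on old versus new tasks changes automatically as the two losses evolve, which is the behavior the abstract contrasts with GEM and A-GEM, where the weight on the current task stays fixed.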