We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), which uses two group sparsity-based penalties. Our method selectively applies the two penalties to each neural network node based on its importance, which is adaptively updated after learning each task. By utilizing the proximal gradient descent method, exact sparsity and freezing of the model are guaranteed during learning, and thus the learner explicitly controls the model capacity. Furthermore, as a critical detail, we re-initialize the weights associated with unimportant nodes after learning each task in order to facilitate efficient learning and prevent negative transfer. Through extensive experiments, we show that AGS-CL uses orders of magnitude less memory for storing the regularization parameters, and that it significantly outperforms several state-of-the-art baselines on representative benchmarks for both supervised and reinforcement learning.
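The exact sparsity guarantee mentioned above comes from the fact that the group-lasso penalty has a closed-form proximal operator (group soft-thresholding), so groups of weights are set to exactly zero rather than merely shrunk. A minimal NumPy sketch of a proximal gradient step with this operator follows; it is illustrative only, not the authors' implementation, and the names `group_prox`, `lam`, `eta`, and the one-column-per-node weight layout are assumptions:

```python
import numpy as np

def group_prox(W, lam, eta):
    """Proximal operator of the group-lasso penalty lam * sum_g ||w_g||_2,
    applied column-wise: each column of W holds one node's incoming weights.
    Columns whose L2 norm is at most eta*lam are set exactly to zero;
    larger columns are shrunk toward zero by a factor (1 - eta*lam/||w_g||)."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    scale = np.maximum(1.0 - eta * lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale

def proximal_gradient_step(W, grad, lam, eta):
    """One proximal gradient descent step: a gradient step on the smooth
    task loss, followed by the group-sparsity prox, which yields exactly
    zero groups (pruned nodes) rather than small nonzero weights."""
    return group_prox(W - eta * grad, lam, eta)

# Example: the small-norm node is zeroed out exactly,
# while the large-norm node is only slightly shrunk.
W = np.array([[3.0, 0.05],
              [4.0, 0.05]])          # two nodes, one per column
W_new = group_prox(W, lam=1.0, eta=0.1)
```

This closed-form prox is what lets the learner control capacity explicitly: pruned (exactly-zero) nodes can later be re-initialized, as described in the abstract.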
Sangwon Jung (SKKU)
Hongjoon Ahn (Sungkyunkwan University)
Sungmin Cha (Sungkyunkwan University)
Taesup Moon (Sungkyunkwan University (SKKU))
Taesup Moon is currently an associate professor at Sungkyunkwan University (SKKU), Korea. Prior to joining SKKU in 2017, he was an assistant professor at DGIST from 2015 to 2017, a research staff member at Samsung Advanced Institute of Technology (SAIT) from 2013 to 2015, a postdoctoral researcher in the Department of Statistics at UC Berkeley from 2012 to 2013, and a research scientist at Yahoo! Labs from 2008 to 2012. He received his Ph.D. and M.S. degrees in Electrical Engineering from Stanford University, CA, USA in 2008 and 2004, respectively, and his B.S. degree in Electrical Engineering from Seoul National University, Korea in 2002. His research interests are in deep learning, statistical machine learning, data science, signal processing, and information theory.
More from the Same Authors
2021 Poster: SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning »
Sungmin Cha · Beomyoung Kim · YoungJoon Yoo · Taesup Moon
2019 Poster: Uncertainty-based Continual Learning with Adaptive Regularization »
Hongjoon Ahn · Sungmin Cha · Donggyu Lee · Taesup Moon
2019 Poster: Fooling Neural Network Interpretations via Adversarial Model Manipulation »
Juyeon Heo · Sunghwan Joo · Taesup Moon
2016 Poster: Neural Universal Discrete Denoiser »
Taesup Moon · Seonwoo Min · Byunghan Lee · Sungroh Yoon