Backpropagation networks are notably susceptible to catastrophic forgetting: upon learning new skills, they tend to forget previously learned ones. To address this 'sensitivity-stability' dilemma, most previous efforts have been devoted to minimizing the empirical risk with various parameter regularization terms and episodic memory, while rarely exploring the weight loss landscape. In this paper, we investigate the relationship between the weight loss landscape and sensitivity-stability in the continual learning scenario, and based on this, we propose a novel method, Flattening Sharpness for Dynamic Gradient Projection Memory (FS-DGPM). In particular, we introduce a soft weight to represent the importance of each basis representing past tasks in GPM, which can be adaptively learned during training, so that less important bases can be dynamically released to improve the sensitivity of new skill learning. We further introduce Flattening Sharpness (FS) to reduce the generalization gap by explicitly regularizing the flatness of the weight loss landscape across all seen tasks. As demonstrated empirically, our proposed method consistently outperforms baselines with a superior ability to learn new skills while effectively alleviating forgetting.
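To make the two mechanisms in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: `project_gradient` illustrates gradient projection with learnable soft importance weights per basis (a weight of 1 fully protects a past-task direction, 0 releases it), and `sharpness_aware_grad` illustrates the general sharpness-flattening idea via a SAM-style worst-case perturbation. All function names, the `rho` parameter, and the toy data are illustrative assumptions; FS-DGPM's actual update rules differ in detail.

```python
import numpy as np

def project_gradient(grad, bases, soft_weights):
    """Remove the components of `grad` along memorized bases,
    each scaled by its soft importance weight.

    grad:         (d,) gradient for the new task
    bases:        (k, d) orthonormal bases spanning past-task subspaces
    soft_weights: (k,) importance in [0, 1]; 1 = fully protect, 0 = release
    """
    coeffs = bases @ grad                      # (k,) projection coefficients
    return grad - bases.T @ (soft_weights * coeffs)

def sharpness_aware_grad(loss_grad_fn, w, rho=0.05):
    """SAM-style flatness step (illustrative, not FS-DGPM's exact rule):
    evaluate the gradient at a worst-case perturbation w + rho * g/||g||,
    which encourages convergence to flat regions of the loss landscape."""
    g = loss_grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return loss_grad_fn(w + eps)

# Toy example: 2-D parameters, one past-task basis along the x-axis.
bases = np.array([[1.0, 0.0]])
grad = np.array([3.0, 4.0])

fully_protected = project_gradient(grad, bases, np.array([1.0]))  # x-component removed
released = project_gradient(grad, bases, np.array([0.0]))         # gradient untouched
```

With the soft weight at 1, the update cannot move along the protected direction (stability); lowering the weight toward 0 restores that direction for new learning (sensitivity), which is the trade-off the learned soft weights adapt.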
Author Information
Danruo DENG (The Chinese University of Hong Kong)
Guangyong Chen (SIAT, CAS)
Jianye Hao (Tianjin University)
Qiong Wang (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Pheng-Ann Heng (The Chinese University of Hong Kong)
More from the Same Authors
- 2021 : OVD-Explorer: A General Information-theoretic Exploration Approach for Reinforcement Learning »
  Jinyi Liu · Zhi Wang · YAN ZHENG · Jianye Hao · Junjie Ye · Chenjia Bai · Pengyi Li
- 2021 : HyAR: Addressing Discrete-Continuous Action Reinforcement Learning via Hybrid Action Representation »
  Boyan Li · Hongyao Tang · YAN ZHENG · Jianye Hao · Pengyi Li · Zhaopeng Meng · LI Wang
- 2021 : PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration »
  Pengyi Li · Hongyao Tang · Tianpei Yang · Xiaotian Hao · Sang Tong · YAN ZHENG · Jianye Hao · Matthew Taylor · Jinyi Liu
- 2022 Poster: Transformer-based Working Memory for Multiagent Reinforcement Learning with Action Parsing »
  Yaodong Yang · Guangyong Chen · Weixun Wang · Xiaotian Hao · Jianye Hao · Pheng-Ann Heng
- 2021 : HyAR: Addressing Discrete-Continuous Action Reinforcement Learning via Hybrid Action Representation Q&A »
  Boyan Li · Hongyao Tang · YAN ZHENG · Jianye Hao · Pengyi Li · Zhaopeng Meng · LI Wang
- 2021 Poster: Model-Based Reinforcement Learning via Imagination with Derived Memory »
  Yao Mu · Yuzheng Zhuang · Bin Wang · Guangxiang Zhu · Wulong Liu · Jianyu Chen · Ping Luo · Shengbo Li · Chongjie Zhang · Jianye Hao
- 2021 Poster: Adaptive Online Packing-guided Search for POMDPs »
  Chenyang Wu · Guoyu Yang · Zongzhang Zhang · Yang Yu · Dong Li · Wulong Liu · Jianye Hao
- 2021 Poster: A Hierarchical Reinforcement Learning Based Optimization Framework for Large-scale Dynamic Pickup and Delivery Problems »
  Yi Ma · Xiaotian Hao · Jianye Hao · Jiawen Lu · Xing Liu · Tong Xialiang · Mingxuan Yuan · Zhigang Li · Jie Tang · Zhaopeng Meng
- 2021 Poster: An Efficient Transfer Learning Framework for Multiagent Reinforcement Learning »
  Tianpei Yang · Weixun Wang · Hongyao Tang · Jianye Hao · Zhaopeng Meng · Hangyu Mao · Dong Li · Wulong Liu · Yingfeng Chen · Yujing Hu · Changjie Fan · Chengwei Zhang
- 2021 Poster: Dynamic Bottleneck for Robust Self-Supervised Exploration »
  Chenjia Bai · Lingxiao Wang · Lei Han · Animesh Garg · Jianye Hao · Peng Liu · Zhaoran Wang