

Poster

Make Continual Learning Stronger via C-Flat

Ang Bian · Wei Li · Hangjie Yuan · Yu Chengrong · Mang Wang · Zixiang Zhao · Aojun Lu · Pengliang Ji · Tao Feng

East Exhibit Hall A-C #4706
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Balancing the 'sensitivity-stability' trade-off between learning new tasks and preserving memory is critical in continual learning (CL) for resolving catastrophic forgetting. Improving model generalization within each learning phase is one way to help CL bridge the gap in the joint knowledge space. Zeroth-order sharpness-aware minimization of the loss landscape is a strong training regime that improves generalization in transfer learning compared with optimizers such as SGD, and it has also been introduced into CL to improve memory representation or learning efficiency. However, zeroth-order sharpness alone can favor sharper over flatter minima in certain scenarios, yielding a sensitive minimum rather than a global optimum. To further enhance learning stability, we propose Continual Flatness (C-Flat), a method featuring a flatter loss landscape tailored for CL. C-Flat can be invoked with a single line of code and is plug-and-play with any CL method. This paper presents a general framework applying C-Flat to all CL categories and a thorough comparison with loss-minimum optimizers and flat-minima-based CL approaches, showing that our method boosts CL performance in almost all cases. Code will be publicly available upon publication.
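To make the "sharpness-aware minimization" family the abstract builds on concrete, below is a minimal PyTorch sketch of a standard SAM-style update step: ascend to a nearby high-loss point within a radius rho, then descend using the gradient measured there. This illustrates the generic technique only; the function name sam_step, its signature, and the wrapping convention are illustrative assumptions, not the paper's actual C-Flat interface.

```python
# Minimal sketch of a SAM-style sharpness-aware update (the optimizer
# family the abstract references). NOT the paper's C-Flat implementation;
# sam_step and its API are hypothetical, for illustration only.
import torch

def sam_step(model, loss_fn, inputs, targets, base_opt, rho=0.05):
    """One sharpness-aware update: perturb weights toward higher loss,
    then apply the base optimizer using the gradient at that point."""
    base_opt.zero_grad()

    # First pass: gradient at the current weights.
    loss_fn(model(inputs), targets).backward()

    # Ascent step: move each weight along its gradient, scaled so the
    # total perturbation has norm rho.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # Second pass: gradient at the perturbed (sharper) weights.
    base_opt.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Restore the original weights, then descend with the base optimizer
    # (assumes base_opt was constructed over all model parameters).
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
```

Under this reading, the abstract's "one line of code" plausibly corresponds to calling such a flatness-aware step in place of a standard opt.step() inside an existing CL training loop, which is what makes it plug-and-play across CL methods.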
