A coreset is a small subset that summarizes a large dataset, such that training solely on the small set achieves performance competitive with training on the full dataset. In rehearsal-based continual learning, the coreset is typically used in the memory replay buffer to store representative samples from previous tasks, and the coreset selection procedure is typically formulated as a bilevel problem. However, the typical bilevel formulation for coreset selection explicitly optimizes over discrete decision variables with greedy search, which is computationally expensive. Several works consider other formulations to address this issue, but they ignore the nested nature of bilevel optimization problems and may not solve the bilevel coreset selection problem accurately. To address these issues, we propose a new bilevel formulation, where the inner problem finds a model that minimizes the expected training error under a given probability distribution over samples, and the outer problem learns this probability distribution with approximately $K$ (the coreset size) nonzero entries such that the model learned in the inner problem minimizes the training error over the whole dataset. To ensure the learned probability distribution has approximately $K$ nonzero entries, we introduce a novel regularizer based on the smoothed top-$K$ loss in the outer problem. We design a new optimization algorithm that provably converges to an $\epsilon$-stationary point with $O(1/\epsilon^4)$ computational complexity. We conduct extensive experiments in various continual learning settings, including balanced data, imbalanced data, and label noise, to show that our proposed formulation and new algorithm significantly outperform competitive baselines. From a bilevel optimization point of view, our algorithm significantly improves upon the vanilla greedy coreset selection method in terms of running time on continual learning benchmark datasets. The code is available at https://github.com/MingruiLiu-ML-Lab/Bilevel-Coreset-Selection-via-Regularization.
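The abstract describes the formulation only in words; below is a minimal LaTeX sketch of how such a bilevel problem could be written down. The symbols ($w$ for the sample weights on the probability simplex $\Delta^n$, $\ell$ for the per-sample loss, $\lambda$ for the regularization weight, and $R_K$ for the regularizer) and the concrete form of the top-$K$ term are illustrative assumptions, not the paper's precise definitions.

```latex
% A minimal sketch of a bilevel coreset-selection formulation of the kind
% described in the abstract. Symbols and the form of R_K are assumptions
% for illustration, not the authors' exact definitions.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
  % Inner problem: fit model parameters under the sampling distribution w.
  \theta^{*}(w) &\in \operatorname*{arg\,min}_{\theta}\;
      \sum_{i=1}^{n} w_i\, \ell\bigl(\theta;\, x_i, y_i\bigr), \\
  % Outer problem: choose w on the probability simplex so that the inner
  % solution performs well on the whole dataset, while the regularizer
  % pushes w toward having roughly K nonzero entries.
  \min_{w \in \Delta^{n}}\;& \frac{1}{n}\sum_{i=1}^{n}
      \ell\bigl(\theta^{*}(w);\, x_i, y_i\bigr) \;+\; \lambda\, R_{K}(w), \\
  % One plausible (assumed) top-K penalty: the probability mass lying
  % outside the K largest entries w_{[1]} >= ... >= w_{[n]}; a smoothed,
  % differentiable surrogate of this quantity would be used in practice.
  R_{K}(w) &\approx \sum_{i=K+1}^{n} w_{[i]}.
\end{align}
\end{document}
```

When the penalty above is driven to zero, nearly all of the probability mass concentrates on at most $K$ samples, which is what allows the learned distribution to be read off as a coreset of size approximately $K$.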
Author Information
Jie Hao (George Mason University)
Kaiyi Ji (University at Buffalo)
Kaiyi Ji is now an assistant professor in the Department of Computer Science and Engineering at the University at Buffalo. He was a postdoctoral research fellow in the Electrical Engineering and Computer Science Department of the University of Michigan, Ann Arbor, in 2022, working with Prof. Lei Ying. He received his Ph.D. degree from the Electrical and Computer Engineering Department of The Ohio State University in December 2021, advised by Prof. Yingbin Liang. He was a visiting student research collaborator in the Department of Electrical Engineering at Princeton University, working with Prof. H. Vincent Poor. He previously obtained his B.S. degree from the University of Science and Technology of China in 2016.
Mingrui Liu (George Mason University)
More from the Same Authors
- 2021 Spotlight: Provably Faster Algorithms for Bilevel Optimization » Junjie Yang · Kaiyi Ji · Yingbin Liang
- 2023 Poster: Global Convergence Analysis of Local SGD for Two-layer Neural Network without Overparameterization » Yajie Bao · Amarda Shehu · Mingrui Liu
- 2023 Poster: SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning » Yifan Yang · Peiyao Xiao · Kaiyi Ji
- 2023 Poster: Federated Learning with Client Subsampling, Data Heterogeneity, and Unbounded Smoothness: A New Algorithm and Lower Bounds » Michael Crawshaw · Yajie Bao · Mingrui Liu
- 2023 Poster: Achieving $\mathcal{O}(\epsilon^{-1.5})$ Complexity in Hessian/Jacobian-free Stochastic Bilevel Optimization » Yifan Yang · Peiyao Xiao · Kaiyi Ji
- 2023 Poster: Direction-oriented Multi-objective Learning: Simple and Provable Stochastic Algorithms » Peiyao Xiao · Hao Ban · Kaiyi Ji
- 2023 Poster: Non-Convex Bilevel Optimization with Time-Varying Objective Functions » Sen Lin · Daouda Sow · Kaiyi Ji · Yingbin Liang · Ness Shroff
- 2022 Spotlight: A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks » Mingrui Liu · Zhenxun Zhuang · Yunwen Lei · Chunyang Liao
- 2022 Spotlight: Will Bilevel Optimizers Benefit from Loops » Kaiyi Ji · Mingrui Liu · Yingbin Liang · Lei Ying
- 2022 Poster: A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks » Mingrui Liu · Zhenxun Zhuang · Yunwen Lei · Chunyang Liao
- 2022 Poster: Robustness to Unbounded Smoothness of Generalized SignSGD » Michael Crawshaw · Mingrui Liu · Francesco Orabona · Wei Zhang · Zhenxun Zhuang
- 2022 Poster: On the Convergence Theory for Hessian-Free Bilevel Algorithms » Daouda Sow · Kaiyi Ji · Yingbin Liang
- 2022 Poster: Will Bilevel Optimizers Benefit from Loops » Kaiyi Ji · Mingrui Liu · Yingbin Liang · Lei Ying
- 2021 Poster: Generalization Guarantee of SGD for Pairwise Learning » Yunwen Lei · Mingrui Liu · Yiming Ying
- 2021 Poster: Provably Faster Algorithms for Bilevel Optimization » Junjie Yang · Kaiyi Ji · Yingbin Liang
- 2020 Poster: Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters » Kaiyi Ji · Jason Lee · Yingbin Liang · H. Vincent Poor
- 2020 Poster: Improved Schemes for Episodic Memory-based Lifelong Learning » Yunhui Guo · Mingrui Liu · Tianbao Yang · Tajana S Rosing
- 2020 Spotlight: Improved Schemes for Episodic Memory-based Lifelong Learning » Yunhui Guo · Mingrui Liu · Tianbao Yang · Tajana S Rosing
- 2020 Poster: A Decentralized Parallel Algorithm for Training Generative Adversarial Nets » Mingrui Liu · Wei Zhang · Youssef Mroueh · Xiaodong Cui · Jarret Ross · Tianbao Yang · Payel Das
- 2019 Poster: SpiderBoost and Momentum: Faster Variance Reduction Algorithms » Zhe Wang · Kaiyi Ji · Yi Zhou · Yingbin Liang · Vahid Tarokh
- 2018 Poster: Minimax Estimation of Neural Net Distance » Kaiyi Ji · Yingbin Liang