

Poster

Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization

Qianli Shen · Yezhen Wang · Zhouhao Yang · Xiang Li · Haonan Wang · Yang Zhang · Jonathan Scarlett · Zhanxing Zhu · Kenji Kawaguchi

West Ballroom A-D #6004
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: Bi-level optimization (BO) has become a fundamental mathematical framework for addressing hierarchical machine learning problems. As deep learning models continue to grow in size, the demand for scalable bi-level optimization has become increasingly critical. Traditional gradient-based bi-level optimization algorithms, due to their inherent characteristics, are ill-suited to meet the demands of large-scale applications. In this paper, we introduce **F**orward **G**radient **U**nrolling with **F**orward **G**radient, abbreviated as **(FG)$^2$U**, which achieves an unbiased stochastic approximation of the meta gradient for bi-level optimization. (FG)$^2$U circumvents the memory and approximation issues associated with classical bi-level optimization approaches, and delivers significantly more accurate gradient estimates than existing large-scale bi-level optimization approaches. Additionally, (FG)$^2$U is inherently designed to support parallel computing, enabling it to effectively leverage large-scale distributed computing systems to achieve significant computational efficiency. In practice, (FG)$^2$U and other methods can be strategically placed at different stages of the training process to achieve a more cost-effective two-phase paradigm. Further, (FG)$^2$U is easy to implement within popular deep learning frameworks, and can be conveniently adapted to address more challenging zeroth-order bi-level optimization scenarios. We provide a thorough convergence analysis and a comprehensive practical discussion for (FG)$^2$U, complemented by extensive empirical evaluations, showcasing its superior performance in diverse large-scale bi-level optimization tasks.
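The abstract's central idea combines two ingredients: unrolling the inner optimization and estimating the meta gradient with a forward-mode Jacobian-vector product along a random tangent direction, which yields an unbiased gradient estimate without storing the unrolled trajectory as reverse-mode unrolling would. Below is a minimal, illustrative sketch of that idea in JAX on a toy quadratic bi-level problem; all function names (`inner_loss`, `outer_loss`, `unroll`, `forward_gradient_estimate`) and hyperparameters are hypothetical placeholders, not taken from the paper.

```python
import jax
import jax.numpy as jnp

# Toy bi-level setup (illustrative only): lam parameterizes the inner
# regularizer, the inner loop runs a few gradient steps, and the outer
# loss is evaluated at the unrolled inner solution.

def inner_loss(w, lam, x, y):
    pred = x @ w
    return jnp.mean((pred - y) ** 2) + jnp.sum(lam * w ** 2)

def outer_loss(w, x_val, y_val):
    return jnp.mean((x_val @ w - y_val) ** 2)

def unroll(lam, w0, x, y, lr=0.1, steps=20):
    # Inner gradient descent; differentiated in forward mode w.r.t. lam.
    w = w0
    for _ in range(steps):
        w = w - lr * jax.grad(inner_loss)(w, lam, x, y)
    return w

def meta_objective(lam, w0, x, y, x_val, y_val):
    return outer_loss(unroll(lam, w0, x, y), x_val, y_val)

def forward_gradient_estimate(lam, key, *args):
    # Sample a random tangent v with E[v v^T] = I, push it through the
    # unrolled computation with a single JVP, and form the forward-gradient
    # estimator (grad F . v) v, which is unbiased for grad F.
    v = jax.random.normal(key, lam.shape)
    _, jvp_val = jax.jvp(lambda l: meta_objective(l, *args), (lam,), (v,))
    return jvp_val * v

# Example usage with random data (hypothetical sizes).
key = jax.random.PRNGKey(0)
k1, k2, k3, k4, k5 = jax.random.split(key, 5)
x = jax.random.normal(k1, (32, 5)); y = jax.random.normal(k2, (32,))
x_val = jax.random.normal(k3, (32, 5)); y_val = jax.random.normal(k4, (32,))
lam, w0 = jnp.full(5, 0.1), jnp.zeros(5)
g_hat = forward_gradient_estimate(lam, k5, w0, x, y, x_val, y_val)
```

Because each estimate requires only one forward pass per tangent direction, independent directions can be evaluated in parallel and averaged to reduce variance, which is consistent with the abstract's point about exploiting large-scale distributed computing.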
