Despite the established convergence theory of Optimistic Gradient Descent Ascent (OGDA) and Extragradient (EG) methods for convex-concave minimax problems, little is known about the theoretical guarantees of these methods in nonconvex settings. To bridge this gap, this paper establishes, for the first time, the convergence of OGDA and EG methods in the nonconvex-strongly-concave (NC-SC) and nonconvex-concave (NC-C) settings, via a unified analysis through the lens of single-call extragradient methods. We further establish lower bounds on the convergence of GDA/OGDA/EG, shedding light on the tightness of our analysis. We also conduct experiments supporting our theoretical results. We believe our results will advance the theoretical understanding of OGDA and EG methods for solving complicated nonconvex minimax real-world problems, e.g., training Generative Adversarial Networks (GANs) or robust neural networks.
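To make the two methods concrete: EG evaluates the gradient twice per iteration (once at a half step), while OGDA makes a single gradient call and reuses the previous gradient as a correction. The sketch below is illustrative only, not the paper's algorithmic setting: it runs both updates on a toy strongly-convex-strongly-concave objective f(x, y) = 0.5x² + xy − 0.5y², whose unique saddle point is (0, 0); the step size and iteration count are arbitrary choices.

```python
# Toy minimax objective (illustrative, not from the paper):
#   f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, saddle point at (0, 0).
def grad_x(x, y):  # partial derivative of f w.r.t. x
    return x + y

def grad_y(x, y):  # partial derivative of f w.r.t. y
    return x - y

def extragradient(x, y, eta=0.1, steps=300):
    """EG: probe the gradient at a half step, then update from the original point."""
    for _ in range(steps):
        xh = x - eta * grad_x(x, y)    # descent half step in x
        yh = y + eta * grad_y(x, y)    # ascent half step in y
        x = x - eta * grad_x(xh, yh)   # full step using the midpoint gradient
        y = y + eta * grad_y(xh, yh)
    return x, y

def ogda(x, y, eta=0.1, steps=300):
    """OGDA: one gradient call per step, corrected by the previous gradient."""
    gx_prev, gy_prev = grad_x(x, y), grad_y(x, y)
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - eta * (2 * gx - gx_prev)  # optimistic (extrapolated) descent step
        y = y + eta * (2 * gy - gy_prev)  # optimistic ascent step
        gx_prev, gy_prev = gx, gy
    return x, y
```

Both iterations drive (x, y) toward the saddle point (0, 0); the "single-call" structure of OGDA (one fresh gradient per step) is what the paper's unified analysis exploits.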
Author Information
Pouria Mahdavinia (Penn State University)
Yuyang Deng (Penn State)
Haochuan Li (MIT)
Mehrdad Mahdavi (Pennsylvania State University)
Mehrdad Mahdavi is an Assistant Professor of Computer Science & Engineering at Pennsylvania State University. He runs the Machine Learning and Optimization Lab, which works on fundamental problems in computational and theoretical machine learning.
More from the Same Authors
- 2023 Poster: Convergence of Adam under Relaxed Assumptions
  Haochuan Li · Ali Jadbabaie · Alexander Rakhlin
- 2023 Poster: Understanding Deep Gradient Leakage via Inversion Influence Functions
  Haobo Zhang · Junyuan Hong · Yuyang Deng · Mehrdad Mahdavi · Jiayu Zhou
- 2023 Poster: Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation
  Yuyang Deng · Ilja Kuzborskij · Mehrdad Mahdavi
- 2023 Poster: Distributed Personalized Empirical Risk Minimization
  Yuyang Deng · Mohammad Mahdi Kamani · Pouria Mahdavinia · Mehrdad Mahdavi
- 2023 Poster: Beyond Lipschitz Smoothness: A New Approach to Convex and Non-Convex Optimization
  Haochuan Li · Jian Qian · Yi Tian · Ali Jadbabaie · Alexander Rakhlin
- 2021 Poster: Complexity Lower Bounds for Nonconvex-Strongly-Concave Min-Max Optimization
  Haochuan Li · Yi Tian · Jingzhao Zhang · Ali Jadbabaie
- 2020 Poster: Online Structured Meta-learning
  Huaxiu Yao · Yingbo Zhou · Mehrdad Mahdavi · Zhenhui (Jessie) Li · Richard Socher · Caiming Xiong
- 2020 Poster: GCN meets GPU: Decoupling "When to Sample" from "How to Sample"
  Morteza Ramezani · Weilin Cong · Mehrdad Mahdavi · Anand Sivasubramaniam · Mahmut Kandemir
- 2020 Poster: Distributionally Robust Federated Averaging
  Yuyang Deng · Mohammad Mahdi Kamani · Mehrdad Mahdavi
- 2019 Poster: Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization
  Farzin Haddadpour · Mohammad Mahdi Kamani · Mehrdad Mahdavi · Viveck Cadambe
- 2019 Poster: Convergence of Adversarial Training in Overparametrized Neural Networks
  Ruiqi Gao · Tianle Cai · Haochuan Li · Cho-Jui Hsieh · Liwei Wang · Jason Lee
- 2019 Spotlight: Convergence of Adversarial Training in Overparametrized Neural Networks
  Ruiqi Gao · Tianle Cai · Haochuan Li · Cho-Jui Hsieh · Liwei Wang · Jason Lee