Accelerated Algorithms for Monotone Inclusion and Constrained Nonconvex-Nonconcave Min-Max Optimization
Yang Cai · Argyris Oikonomou · Weiqiang Zheng
Event URL: https://openreview.net/forum?id=SfjzfqUAsiw
We study monotone inclusions and monotone variational inequalities, as well as their generalizations to non-monotone settings. We first show that the \emph{Extra Anchored Gradient (EAG)} algorithm, originally proposed by [Yoon and Ryu, 2021] for unconstrained convex-concave min-max optimization, can be applied to solve the more general problem of Lipschitz monotone inclusion. More specifically, we prove that EAG solves Lipschitz monotone inclusion problems with an \emph{accelerated convergence rate} of $O(\frac{1}{T})$, which is \emph{optimal among all first-order methods} [Diakonikolas, 2020; Yoon and Ryu, 2021]. Our second result is an \emph{accelerated forward-backward splitting algorithm (AS)}, which not only achieves the accelerated $O(\frac{1}{T})$ convergence rate for all monotone inclusion problems, but also exhibits the same accelerated rate for a family of general (non-monotone) inclusion problems involving negative comonotone operators. As a special case of our second result, AS enjoys the $O(\frac{1}{T})$ convergence rate for solving a non-trivial class of nonconvex-nonconcave min-max optimization problems. Our analyses are based on simple potential function arguments, which may be useful for analyzing other accelerated algorithms.
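The EAG update combines an extragradient step with anchoring toward the initial point $z_0$, using an anchoring weight $\beta_k = \frac{1}{k+2}$. Below is a minimal illustrative sketch (not the authors' code) of the constant-step-size variant from [Yoon and Ryu, 2021], applied to the simplest Lipschitz monotone operator arising from the bilinear saddle problem $f(x, y) = xy$; the function names and the step size $\alpha = 1/(8L)$ are assumptions for this example.

```python
import numpy as np

def F(z):
    # Monotone operator of the bilinear saddle problem f(x, y) = x * y:
    # F(x, y) = (df/dx, -df/dy) = (y, -x). It is 1-Lipschitz (L = 1).
    x, y = z
    return np.array([y, -x])

def eag(z0, alpha=0.125, T=200):
    """Extra Anchored Gradient with anchor z0 and beta_k = 1/(k+2)."""
    z = z0.copy()
    for k in range(T):
        beta = 1.0 / (k + 2)
        # Extrapolation step: gradient step plus a pull toward the anchor z0.
        z_half = z + beta * (z0 - z) - alpha * F(z)
        # Update step: re-evaluate the operator at the extrapolated point.
        z = z + beta * (z0 - z) - alpha * F(z_half)
    return z

z0 = np.array([1.0, 1.0])
zT = eag(z0)
# The operator norm ||F(z_T)|| shrinks toward 0 at the accelerated O(1/T) rate.
```

The anchoring term $\beta_k (z_0 - z)$ is what distinguishes EAG from the classical extragradient method and yields the accelerated $O(\frac{1}{T})$ rate on the operator norm, rather than the $O(\frac{1}{\sqrt{T}})$ rate of extragradient.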
Author Information
Yang Cai (Yale University)
Argyris Oikonomou (Yale University)

Argyris Oikonomou is a PhD student in the Department of Computer Science at Yale, advised by Yang Cai. He completed his undergraduate studies at the National Technical University of Athens. His research interests are in mechanism design and algorithmic game theory.
Weiqiang Zheng (Yale University)
More from the Same Authors
- 2021: Nash Convergence of Mean-Based Learning Algorithms in First Price Auctions (Xiaotie Deng · Xinyan Hu · Tao Lin · Weiqiang Zheng)
- 2022: Accelerated Single-Call Methods for Constrained Min-Max Optimization (Yang Cai · Weiqiang Zheng)
- 2023 Poster: Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games (Yang Cai · Haipeng Luo · Chen-Yu Wei · Weiqiang Zheng)
- 2022 Panel: Panel 2A-4: Uncoupled Learning Dynamics… & Finite-Time Last-Iterate Convergence… (Yang Cai · Ioannis Anagnostides)
- 2022: Poster Session 2 (Jinwuk Seok · Bo Liu · Ryotaro Mitsuboshi · David Martinez-Rubio · Weiqiang Zheng · Ilgee Hong · Chen Fan · Kazusato Oko · Bo Tang · Miao Cheng · Aaron Defazio · Tim G. J. Rudner · Gabriele Farina · Vishwak Srinivasan · Ruichen Jiang · Peng Wang · Jane Lee · Nathan Wycoff · Nikhil Ghosh · Yinbin Han · David Mueller · Liu Yang · Amrutha Varshini Ramesh · Siqi Zhang · Kaifeng Lyu · David Yunis · Kumar Kshitij Patel · Fangshuo Liao · Dmitrii Avdiukhin · Xiang Li · Sattar Vakili · Jiaxin Shi)
- 2022: Poster Session 1 (Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li)
- 2022 Poster: Finite-Time Last-Iterate Convergence for Learning in Multi-Player Games (Yang Cai · Argyris Oikonomou · Weiqiang Zheng)
- 2018 Poster: Learning Safe Policies with Expert Guidance (Jessie Huang · Fa Wu · Doina Precup · Yang Cai)
- 2017: Spotlights (Antti Kangasrääsiö · Richard Everett · Yitao Liang · Yang Cai · Steven Wu · Vidya Muthukumar · Sven Schmit)