Poster
A Single-Loop Smoothed Gradient Descent-Ascent Algorithm for Nonconvex-Concave Min-Max Problems
Jiawei Zhang · Peijun Xiao · Ruoyu Sun · Zhiquan Luo
Nonconvex-concave min-max problems arise in many machine learning applications, including minimizing a pointwise maximum of a set of nonconvex functions and robust adversarial training of neural networks. A popular approach to solving such problems is the gradient descent-ascent (GDA) algorithm, which unfortunately can oscillate when the objective is nonconvex. In this paper, we introduce a "smoothing" scheme that can be combined with GDA to stabilize the oscillation and ensure convergence to a stationary solution. We prove that the stabilized GDA algorithm achieves an $O(1/\epsilon^2)$ iteration complexity for minimizing the pointwise maximum of a finite collection of nonconvex functions. Moreover, the smoothed GDA algorithm achieves an $O(1/\epsilon^4)$ iteration complexity for general nonconvex-concave problems. Extensions of this stabilized GDA algorithm to multi-block cases are presented. To the best of our knowledge, this is the first algorithm to achieve $O(1/\epsilon^2)$ for a class of nonconvex-concave problems. We illustrate the practical efficiency of the stabilized GDA algorithm on robust training.
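The update structure described in the abstract (a descent step on a proximally smoothed primal function, an ascent step on the concave dual variable, and an exponentially averaged auxiliary sequence) can be illustrated with a small numerical sketch. The snippet below is a minimal illustration under assumptions not taken from the paper: a toy pointwise-maximum objective $\min_x \max_i f_i(x)$ rewritten as a min-max problem over the probability simplex, and hypothetical step sizes `c`, `alpha`, `beta` and smoothing weight `p` rather than the theoretically prescribed choices.

```python
# Minimal sketch of a smoothed GDA-style loop on a toy pointwise-maximum problem.
# Not the paper's exact algorithm; step sizes and the test problem are placeholders.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Toy problem: min_x max_i f_i(x) = min_x max_{y in simplex} sum_i y_i f_i(x).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))          # five nonconvex components f_i
b = rng.normal(size=(5, 3))

def f(x):                            # vector of component values f_i(x)
    return np.sin(A @ x) + 0.05 * np.sum((x - b) ** 2, axis=1)

def grad_f(x):                       # rows are the gradients of f_i(x)
    return np.cos(A @ x)[:, None] * A + 0.1 * (x - b)

# Hypothetical step sizes and smoothing weight (placeholders).
c, alpha, beta, p = 0.05, 0.05, 0.1, 1.0

x = np.zeros(3)                      # primal variable
y = np.ones(5) / 5                   # dual variable on the simplex
z = x.copy()                         # auxiliary "smoothing" sequence

for t in range(2000):
    # Descent step on the smoothed primal function f(x, y) + (p/2)||x - z||^2.
    x = x - c * (grad_f(x).T @ y + p * (x - z))
    # Projected ascent step on the concave dual variable.
    y = project_simplex(y + alpha * f(x))
    # Exponential averaging of the auxiliary sequence damps the oscillation.
    z = z + beta * (x - z)

print("approx. pointwise max at the returned point:", f(x).max())
```

The proximal term $\tfrac{p}{2}\|x - z\|^2$ and the slowly moving average $z$ are what distinguish this loop from plain GDA, which would drop the last line and the `p * (x - z)` correction.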
Author Information
Jiawei Zhang (The Chinese University of Hong Kong, Shenzhen)
Peijun Xiao (University of Illinois at Urbana-Champaign (UIUC))
Ruoyu Sun (University of Illinois at Urbana-Champaign)
Zhiquan Luo (The Chinese University of Hong Kong, Shenzhen and Shenzhen Research Institute of Big Data)
More from the Same Authors
- 2021: Model-based Distributional Reinforcement Learning for Risk-sensitive Control
  Hao Liang · Zhiquan Luo
- 2021: HyperDQN: A Randomized Exploration Method for Deep Reinforcement Learning
  Ziniu Li · Yingru Li · Yushun Zhang · Tong Zhang · Zhiquan Luo
- 2021 Poster: Faster Directional Convergence of Linear Neural Networks under Spherically Symmetric Data
  Dachao Lin · Ruoyu Sun · Zhihua Zhang
- 2021 Poster: When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work
  Jiawei Zhang · Yushun Zhang · Mingyi Hong · Ruoyu Sun · Zhi-Quan Luo
- 2020 Poster: Towards a Better Global Loss Landscape of GANs
  Ruoyu Sun · Tiantian Fang · Alex Schwing
- 2020 Oral: Towards a Better Global Loss Landscape of GANs
  Ruoyu Sun · Tiantian Fang · Alex Schwing
- 2018 Poster: Adding One Neuron Can Eliminate All Bad Local Minima
  SHIYU LIANG · Ruoyu Sun · Jason Lee · R. Srikant