Poster
Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization
Feihu Huang · Shangqian Gao · Jian Pei · Heng Huang
In this paper, we propose a class of accelerated zeroth-order and first-order momentum methods for both nonconvex mini-optimization and minimax optimization. Specifically, we propose a new accelerated zeroth-order momentum (Acc-ZOM) method for black-box mini-optimization, where only function values can be obtained. We prove that our Acc-ZOM method achieves a lower query complexity of $\tilde{O}(d^{3/4}\epsilon^{-3})$ for finding an $\epsilon$-stationary point, which improves the best known result by a factor of $O(d^{1/4})$, where $d$ denotes the variable dimension. In particular, our Acc-ZOM does not require the large batches needed by existing zeroth-order stochastic algorithms. Meanwhile, we propose an accelerated zeroth-order momentum descent ascent (Acc-ZOMDA) method for black-box minimax optimization, where only function values can be obtained. Our Acc-ZOMDA attains a low query complexity of $\tilde{O}((d_1+d_2)^{3/4}\kappa_y^{4.5}\epsilon^{-3})$ for finding an $\epsilon$-stationary point without requiring large batches, where $d_1$ and $d_2$ denote the variable dimensions and $\kappa_y$ is the condition number. Moreover, we propose an accelerated first-order momentum descent ascent (Acc-MDA) method for minimax optimization where explicit gradients are accessible. Our Acc-MDA achieves a low gradient complexity of $\tilde{O}(\kappa_y^{4.5}\epsilon^{-3})$ for finding an $\epsilon$-stationary point without requiring large batches. In particular, with a batch size of $O(\kappa_y^4)$, our Acc-MDA obtains a lower gradient complexity of $\tilde{O}(\kappa_y^{2.5}\epsilon^{-3})$, which improves the best known result by a factor of $O(\kappa_y^{1/2})$. Extensive experimental results on black-box adversarial attacks on deep neural networks and poisoning attacks on logistic regression demonstrate the efficiency of our algorithms.
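To make the black-box (zeroth-order) setting concrete, the sketch below shows a generic two-point zeroth-order gradient estimator combined with a plain momentum update for mini-optimization. It is only an illustrative stand-in assuming a standard Gaussian-smoothing estimator, not the paper's Acc-ZOM algorithm, and the smoothing radius `mu`, step size `eta`, and momentum weight `beta` are placeholder choices.

```python
# Minimal sketch of a generic zeroth-order momentum method (illustrative only;
# NOT the Acc-ZOM algorithm from the paper, and the hyperparameters are arbitrary).
import numpy as np

def zo_gradient(f, x, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate along a random Gaussian direction."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    # Only function values of f are used; no explicit gradient is available.
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def zo_momentum_minimize(f, x0, steps=500, eta=0.05, beta=0.9, rng=None):
    """Minimize f using only function-value queries and a momentum (EMA) update."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for _ in range(steps):
        g = zo_gradient(f, x, rng=rng)   # gradient estimated from two function queries
        v = beta * v + (1.0 - beta) * g  # momentum: exponential moving average of estimates
        x = x - eta * v                  # descent step
    return x

if __name__ == "__main__":
    # Example: treat a simple quadratic as a black box and minimize it.
    f = lambda z: float(np.sum((z - 1.0) ** 2))
    print(zo_momentum_minimize(f, np.zeros(5)))
```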
Author Information
Feihu Huang (University of Pittsburgh)
Shangqian Gao (University of Pittsburgh)
Jian Pei (Simon Fraser University)
Heng Huang
More from the Same Authors
- 2022 Poster: Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum
  Nian Liu · Xiao Wang · Deyu Bo · Chuan Shi · Jian Pei
- 2022 Spotlight: Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum
  Nian Liu · Xiao Wang · Deyu Bo · Chuan Shi · Jian Pei
- 2022 Spotlight: Lightning Talks 6A-1
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Poster: Enhanced Bilevel Optimization via Bregman Distance
  Feihu Huang · Junyi Li · Shangqian Gao · Heng Huang
- 2021 Poster: Optimal Underdamped Langevin MCMC Method
  Zhengmian Hu · Feihu Huang · Heng Huang
- 2021 Poster: Robust Counterfactual Explanations on Graph Neural Networks
  Mohit Bajaj · Lingyang Chu · Zi Yu Xue · Jian Pei · Lanjun Wang · Peter Cho-Ho Lam · Yong Zhang
- 2021 Poster: SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients
  Feihu Huang · Junyi Li · Heng Huang
- 2021 Poster: Efficient Mirror Descent Ascent Methods for Nonsmooth Minimax Problems
  Feihu Huang · Xidong Wu · Heng Huang
- 2021 Poster: A Faster Decentralized Algorithm for Nonconvex Minimax Problems
  Wenhan Xian · Feihu Huang · Yanfu Zhang · Heng Huang
- 2019 Poster: Cross-Modal Learning with Adversarial Samples
  CHAO LI · Shangqian Gao · Cheng Deng · De Xie · Wei Liu