Poster
Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity
William de Vazelhes · Hualin Zhang · Huimin Wu · Xiaotong Yuan · Bin Gu
$\ell_0$ constrained optimization is prevalent in machine learning, particularly for high-dimensional problems, because it is a fundamental approach to achieving sparse learning. Hard-thresholding gradient descent is a dominant technique for solving this problem. However, first-order gradients of the objective function may be either unavailable or expensive to calculate in many real-world problems, where zeroth-order (ZO) gradients can be a good surrogate. Unfortunately, whether ZO gradients can work with the hard-thresholding operator remains an unsolved problem. To solve this puzzle, in this paper, we focus on $\ell_0$ constrained black-box stochastic optimization problems, and propose a new stochastic zeroth-order gradient hard-thresholding (SZOHT) algorithm with a general ZO gradient estimator powered by a novel random support sampling. We provide the convergence analysis of SZOHT under standard assumptions. Importantly, we reveal a conflict between the deviation of ZO estimators and the expansivity of the hard-thresholding operator, and provide a theoretical minimal value for the number of random directions in ZO gradients. In addition, we find that the query complexity of SZOHT is independent of or weakly dependent on the dimensionality under different settings. Finally, we illustrate the utility of our method on a portfolio optimization problem as well as black-box adversarial attacks.
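To make the two ingredients of the abstract concrete, here is a minimal sketch of (i) a two-point zeroth-order gradient estimator with random directions and (ii) the hard-thresholding operator, combined into a ZO hard-thresholding descent loop on a toy quadratic. This is an illustration only, not the paper's SZOHT algorithm: in particular, SZOHT's novel random support sampling for the ZO directions is not reproduced here, and all function and parameter names are our own.

```python
import numpy as np

def zo_gradient(f, x, rng, num_dirs=20, mu=1e-4):
    """Two-point zeroth-order gradient estimate from random directions.
    Illustrative only: SZOHT's estimator also uses random support
    sampling, which this sketch omits."""
    d = x.size
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # uniform random direction on the sphere
        # finite-difference directional derivative, two function queries
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return d * g / num_dirs    # factor d makes the estimate unbiased

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

# ZO hard-thresholding descent on a toy quadratic with sparsity k = 2.
rng = np.random.default_rng(0)
target = np.arange(5.0)  # unconstrained minimizer of f
f = lambda x: np.sum((x - target) ** 2)
x = np.zeros(5)
for _ in range(200):
    x = hard_threshold(x - 0.1 * zo_gradient(f, x, rng), k=2)
# x is now 2-sparse, concentrated on the two largest entries of target
```

The tension the paper analyzes is visible here: each noisy ZO step is followed by hard thresholding, which can amplify the estimation error, so the number of random directions (`num_dirs`) must be large enough for the iterates to converge.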
Author Information
William de Vazelhes (Mohamed bin Zayed University of Artificial Intelligence)
PhD student at MBZUAI, working on optimization, in particular sparse optimization (hard-thresholding), and zeroth-order optimization
Hualin Zhang (Nanjing University of Information Science and Technology)
Huimin Wu (Nanjing University of Information Science and Technology)
Xiaotong Yuan (Nanjing University of Information Science and Technology)
Bin Gu (Pittsburgh University)
More from the Same Authors
- 2021 Spotlight: A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning
  Pan Zhou · Caiming Xiong · Xiaotong Yuan · Steven Chu Hong Hoi
- 2022 Poster: GAGA: Deciphering Age-path of Generalized Self-paced Regularizer
  Xingyu Qu · Diyang Li · Xiaohan Zhao · Bin Gu
- 2022: An Accuracy Guaranteed Online Solver for Learning in Dynamic Feature Space
  Diyang Li · Bin Gu
- 2022 Spotlight: Lightning Talks 4A-3
  Zhihan Gao · Yabin Wang · Xingyu Qu · Luziwei Leng · Mingqing Xiao · Bohan Wang · Yu Shen · Zhiwu Huang · Xingjian Shi · Qi Meng · Yupeng Lu · Diyang Li · Qingyan Meng · Kaiwei Che · Yang Li · Hao Wang · Huishuai Zhang · Zongpeng Zhang · Kaixuan Zhang · Xiaopeng Hong · Xiaohan Zhao · Di He · Jianguo Zhang · Yaofeng Tu · Bin Gu · Yi Zhu · Ruoyu Sun · Yuyang (Bernie) Wang · Zhouchen Lin · Qinghu Meng · Wei Chen · Wentao Zhang · Bin CUI · Jie Cheng · Zhi-Ming Ma · Mu Li · Qinghai Guo · Dit-Yan Yeung · Tie-Yan Liu · Jianxing Liao
- 2022 Spotlight: Lightning Talks 4A-2
  Barakeel Fanseu Kamhoua · Hualin Zhang · Taiki Miyagawa · Tomoya Murata · Xin Lyu · Yan Dai · Elena Grigorescu · Zhipeng Tu · Lijun Zhang · Taiji Suzuki · Wei Jiang · Haipeng Luo · Lin Zhang · Xi Wang · Young-San Lin · Huan Xiong · Liyu Chen · Bin Gu · Jinfeng Yi · Yongqiang Chen · Sandeep Silwal · Yiguang Hong · Maoyuan Song · Lei Wang · Tianbao Yang · Han Yang · MA Kaili · Samson Zhou · Deming Yuan · Bo Han · Guodong Shi · Bo Li · James Cheng
- 2022 Spotlight: Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients
  Hualin Zhang · Huan Xiong · Bin Gu
- 2022 Spotlight: GAGA: Deciphering Age-path of Generalized Self-paced Regularizer
  Xingyu Qu · Diyang Li · Xiaohan Zhao · Bin Gu
- 2022 Poster: On Convergence of FedProx: Local Dissimilarity Invariant Bounds, Non-smoothness and Beyond
  Xiaotong Yuan · Ping Li
- 2022 Poster: Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients
  Hualin Zhang · Huan Xiong · Bin Gu
- 2021 Poster: Towards Understanding Why Lookahead Generalizes Better Than SGD and Beyond
  Pan Zhou · Hanshu Yan · Xiaotong Yuan · Jiashi Feng · Shuicheng Yan
- 2021 Poster: A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning
  Pan Zhou · Caiming Xiong · Xiaotong Yuan · Steven Chu Hong Hoi
- 2018 Poster: New Insight into Hybrid Stochastic Gradient Descent: Beyond With-Replacement Sampling and Convexity
  Pan Zhou · Xiaotong Yuan · Jiashi Feng
- 2018 Poster: Efficient Stochastic Gradient Hard Thresholding
  Pan Zhou · Xiaotong Yuan · Jiashi Feng
- 2018 Poster: Training Neural Networks Using Features Replay
  Zhouyuan Huo · Bin Gu · Heng Huang
- 2018 Spotlight: Training Neural Networks Using Features Replay
  Zhouyuan Huo · Bin Gu · Heng Huang
- 2016 Poster: Exact Recovery of Hard Thresholding Pursuit
  Xiaotong Yuan · Ping Li · Tong Zhang
- 2016 Poster: Learning Additive Exponential Family Graphical Models via $\ell_{2,1}$-norm Regularized M-Estimation
  Xiaotong Yuan · Ping Li · Tong Zhang · Qingshan Liu · Guangcan Liu