The alternating direction method of multipliers (ADMM) is an important tool for solving complex optimization problems, but it involves minimization sub-steps that are often difficult to solve efficiently. The Primal-Dual Hybrid Gradient (PDHG) method is a powerful alternative whose sub-steps are often simpler than those of ADMM, yielding lower-complexity solvers. Despite this flexibility, PDHG is often impractical because it requires the careful choice of multiple stepsize parameters, and there is often no intuitive way to choose them that maximizes efficiency, or even guarantees convergence. We propose self-adaptive stepsize rules that automatically tune PDHG parameters for optimal convergence. We rigorously analyze these methods and identify their convergence rates. Numerical experiments show that adaptive PDHG has strong advantages over non-adaptive methods in terms of both efficiency and simplicity for the user.
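The abstract describes the method only at a high level. As a concrete illustration, below is a minimal NumPy sketch of PDHG (in its standard Chambolle-Pock form) equipped with a residual-balancing stepsize rule in the spirit of the abstract. The test problem (an l1 regression with a small quadratic term), the residual formulas, and the tuning constants `alpha`, `eta`, and `delta` are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def adaptive_pdhg(K, b, lam=0.1, alpha=0.5, eta=0.95, delta=1.5, iters=500):
    """PDHG with a residual-balancing stepsize heuristic (illustrative sketch).

    Solves min_x ||Kx - b||_1 + (lam/2)||x||^2 via the saddle-point form
        min_x max_{||y||_inf <= 1}  <Kx - b, y> + (lam/2)||x||^2.
    """
    m, n = K.shape
    L = np.linalg.norm(K, 2)          # operator norm ||K||
    tau = sigma = 0.95 / L            # initial stepsizes: tau*sigma*L^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    x_bar = x.copy()

    for _ in range(iters):
        x_old, y_old = x, y
        # Dual step: the prox of f*(y) = <b, y> + indicator{||y||_inf <= 1}
        # is a shift by sigma*b followed by projection onto the inf-ball.
        y = np.clip(y + sigma * (K @ x_bar - b), -1.0, 1.0)
        # Primal step: the prox of g(x) = (lam/2)||x||^2 is a simple scaling.
        x = (x_old - tau * (K.T @ y)) / (1.0 + tau * lam)
        x_bar = 2.0 * x - x_old       # extrapolation

        # One common way to measure primal/dual residuals for PDHG.
        p = (x_old - x) / tau - K.T @ (y_old - y)
        d = (y_old - y) / sigma - K @ (x_old - x)
        rp, rd = np.linalg.norm(p), np.linalg.norm(d)

        # Residual balancing: grow the stepsize whose residual lags behind.
        # Each update preserves the product tau*sigma, so the stability
        # condition tau*sigma*L^2 < 1 is never violated, and shrinking
        # alpha damps the adaptivity over time.
        if rp > delta * rd:
            tau, sigma, alpha = tau / (1 - alpha), sigma * (1 - alpha), alpha * eta
        elif rd > delta * rp:
            tau, sigma, alpha = tau * (1 - alpha), sigma / (1 - alpha), alpha * eta

    return x

# Usage on a small random instance.
rng = np.random.default_rng(0)
K = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
x = adaptive_pdhg(K, b)
print("objective:", np.abs(K @ x - b).sum() + 0.05 * (x @ x))
```

A design point worth noting: because each adjustment multiplies tau and divides sigma by the same factor, their product is invariant, so the sketch never leaves the classical PDHG convergence regime; the decaying adaptivity parameter alpha mirrors the kind of damping typically needed to prove convergence of adaptive schemes.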
Author Information
Tom Goldstein (University of Maryland)
Min Li (Southeast University)
Xiaoming Yuan (Hong Kong Baptist University)
More from the Same Authors
- 2020: An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process
  David Tran · Alex Valtchanov · Keshav R Ganapathy · Raymond Feng · Eric Slud · Micah Goldblum · Tom Goldstein
- 2021: Execute Order 66: Targeted Data Poisoning for Reinforcement Learning via Minuscule Perturbations
  Harrison Foley · Liam Fowl · Tom Goldstein · Gavin Taylor
- 2020: The Intrinsic Dimension of Images and Its Impact on Learning
  Chen Zhu · Micah Goldblum · Ahmed Abdelkader · Tom Goldstein · Phillip Pope
- 2020 Workshop: Workshop on Dataset Curation and Security
  Nathalie Baracaldo Angel · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild
- 2020 Poster: Detection as Regression: Certified Object Detection with Median Smoothing
  Ping-yeh Chiang · Michael Curry · Ahmed Abdelkader · Aounon Kumar · John Dickerson · Tom Goldstein
- 2020 Poster: Certifying Confidence via Randomized Smoothing
  Aounon Kumar · Alexander Levine · Soheil Feizi · Tom Goldstein
- 2020 Poster: Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
  Micah Goldblum · Liam Fowl · Tom Goldstein
- 2020 Poster: MetaPoison: Practical General-purpose Clean-label Data Poisoning
  W. Ronny Huang · Jonas Geiping · Liam Fowl · Gavin Taylor · Tom Goldstein
- 2020 Poster: Certifying Strategyproof Auction Networks
  Michael Curry · Ping-yeh Chiang · Tom Goldstein · John Dickerson
- 2019 Poster: Adversarial training for free!
  Ali Shafahi · Mahyar Najibi · Mohammad Amin Ghiasi · Zheng Xu · John Dickerson · Christoph Studer · Larry Davis · Gavin Taylor · Tom Goldstein
- 2018 Poster: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
  Ali Shafahi · W. Ronny Huang · Mahyar Najibi · Octavian Suciu · Christoph Studer · Tudor Dumitras · Tom Goldstein
- 2018 Poster: Visualizing the Loss Landscape of Neural Nets
  Hao Li · Zheng Xu · Gavin Taylor · Christoph Studer · Tom Goldstein
- 2017 Poster: Training Quantized Nets: A Deeper Understanding
  Hao Li · Soham De · Zheng Xu · Christoph Studer · Hanan Samet · Tom Goldstein
- 2015: Spotlight
  Furong Huang · William Gray Roncal · Tom Goldstein