Many problems in machine learning can be solved by rounding the solution of an appropriate linear program (LP). We propose a scheme based on a quadratic-program relaxation that allows us to apply parallel stochastic coordinate descent to solve large LPs approximately and efficiently. Our software is an order of magnitude faster than CPLEX, a commercial LP solver, while yielding comparable solution quality. Our results include a novel perturbation analysis of a quadratic-penalty formulation of linear programming and a convergence result, which we use to derive guarantees on running time and solution quality.
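To illustrate the idea in the abstract, here is a minimal sketch of minimizing a quadratic-penalty formulation of an LP with stochastic coordinate descent. This is an illustrative toy, not the authors' parallel implementation: the penalty parameter `beta`, the step sizes, and the function `penalty_cd` are all assumptions for the example.

```python
import numpy as np

# Approximate the LP   min c'x  s.t.  Ax <= b,  0 <= x <= 1
# by minimizing the quadratic-penalty objective
#   f(x) = c'x + (1/(2*beta)) * || max(Ax - b, 0) ||^2
# with (sequential) stochastic coordinate descent.

def penalty_cd(A, b, c, beta=0.01, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    # Per-coordinate Lipschitz constants of the penalty gradient.
    L = np.sum(A**2, axis=0) / beta + 1e-12
    for _ in range(iters):
        i = rng.integers(n)                        # random coordinate
        r = np.maximum(A @ x - b, 0.0)             # constraint violations
        g = c[i] + (A[:, i] @ r) / beta            # partial derivative of f
        x[i] = np.clip(x[i] - g / L[i], 0.0, 1.0)  # projected coordinate step
    return x

# Tiny example: min -x1 - x2  s.t.  x1 + x2 <= 1,  0 <= x <= 1 (LP optimum -1).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -1.0])
x = penalty_cd(A, b, c)
print(c @ x)  # close to the LP optimum of -1
```

At the penalty minimizer the constraints are violated by roughly `beta`, which is the kind of perturbation the paper's analysis quantifies; in the parallel setting each worker would apply such coordinate updates asynchronously.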
Author Information
Srikrishna Sridhar (UW-Madison)
Stephen Wright (UW-Madison)
Steve Wright is a Professor of Computer Sciences at the University of Wisconsin-Madison. His research interests lie in computational optimization and its applications to science and engineering. Prior to joining UW-Madison in 2001, Wright was a Senior Computer Scientist (1997-2001) and Computer Scientist (1990-1997) at Argonne National Laboratory, and Professor of Computer Science at the University of Chicago (2000-2001). He is the past Chair of the Mathematical Optimization Society (formerly the Mathematical Programming Society), the leading professional society in optimization, and a member of the Board of the Society for Industrial and Applied Mathematics (SIAM). Wright is the author or co-author of four widely used books in numerical optimization, including "Primal-Dual Interior-Point Methods" (SIAM, 1997) and "Numerical Optimization" (with J. Nocedal, Second Edition, Springer, 2006). He has also authored over 85 refereed journal papers on optimization theory, algorithms, software, and applications. He is coauthor of widely used interior-point software for linear and quadratic optimization. His recent research includes algorithms, applications, and theory for sparse optimization (including applications in compressed sensing and machine learning).
Christopher Re (UW-Madison)
Ji Liu (Kwai Inc.)
Victor Bittorf (UW-Madison)
Ce Zhang (UW-Madison)
More from the Same Authors
- 2022: BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
  Mao Ye · Bo Liu · Stephen Wright · Peter Stone · Qiang Liu
- 2022 Poster: Improving Certified Robustness via Statistical Learning with Logical Reasoning
  Zhuolin Yang · Zhikuan Zhao · Boxin Wang · Jiawei Zhang · Linyi Li · Hengzhi Pei · Bojan Karlaš · Ji Liu · Heng Guo · Ce Zhang · Bo Li
- 2022 Poster: BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
  Bo Liu · Mao Ye · Stephen Wright · Peter Stone · Qiang Liu
- 2022 Poster: Coordinate Linear Variance Reduction for Generalized Linear Programming
  Chaobing Song · Cheuk Yin Lin · Stephen Wright · Jelena Diakonikolas
- 2021 Poster: ErrorCompensatedX: error compensation for variance reduced algorithms
  Hanlin Tang · Yao Li · Ji Liu · Ming Yan
- 2021 Poster: TNASP: A Transformer-based NAS Predictor with a Self-evolution Framework
  Shun Lu · Jixiang Li · Jianchao Tan · Sen Yang · Ji Liu
- 2021 Poster: Shifted Chunk Transformer for Spatio-Temporal Representational Learning
  Xuefan Zha · Wentao Zhu · Lv Xun · Sen Yang · Ji Liu
- 2020 Oral: Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
  Benjamin Recht · Christopher Ré · Stephen Wright · Feng Niu
- 2020 Poster: Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free
  Haotao Wang · Tianlong Chen · Shupeng Gui · TingKuei Hu · Ji Liu · Zhangyang Wang
- 2019: Second-order methods for nonconvex optimization with complexity guarantees
  Stephen Wright
- 2019 Poster: Efficient Smooth Non-Convex Stochastic Compositional Optimization via Stochastic Recursive Gradient Descent
  Wenqing Hu · Chris Junchi Li · Xiangru Lian · Ji Liu · Angela Yuan
- 2019 Poster: Global Sparse Momentum SGD for Pruning Very Deep Neural Networks
  Xiaohan Ding · Guiguang Ding · Xiangxin Zhou · Yuchen Guo · Jungong Han · Ji Liu
- 2019 Poster: LIIR: Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning
  Yali Du · Lei Han · Meng Fang · Ji Liu · Tianhong Dai · Dacheng Tao
- 2019 Poster: Model Compression with Adversarial Robustness: A Unified Optimization Framework
  Shupeng Gui · Haotao Wang · Haichuan Yang · Chen Yu · Zhangyang Wang · Ji Liu
- 2018 Poster: Communication Compression for Decentralized Training
  Hanlin Tang · Shaoduo Gan · Ce Zhang · Tong Zhang · Ji Liu
- 2018 Poster: Stochastic Primal-Dual Method for Empirical Risk Minimization with O(1) Per-Iteration Complexity
  Conghui Tan · Tong Zhang · Shiqian Ma · Ji Liu
- 2018 Poster: ATOMO: Communication-efficient Learning via Atomic Sparsification
  Hongyi Wang · Scott Sievert · Shengchao Liu · Zachary Charles · Dimitris Papailiopoulos · Stephen Wright
- 2018 Poster: Gradient Sparsification for Communication-Efficient Distributed Optimization
  Jianqiao Wangni · Jialei Wang · Ji Liu · Tong Zhang
- 2017 Poster: Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
  Xiangru Lian · Ce Zhang · Huan Zhang · Cho-Jui Hsieh · Wei Zhang · Ji Liu
- 2017 Oral: Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
  Xiangru Lian · Ce Zhang · Huan Zhang · Cho-Jui Hsieh · Wei Zhang · Ji Liu
- 2017 Poster: k-Support and Ordered Weighted Sparsity for Overlapping Groups: Hardness and Algorithms
  Cong Han Lim · Stephen Wright
- 2016 Poster: Asynchronous Parallel Greedy Coordinate Descent
  Yang You · Xiangru Lian · Ji Liu · Hsiang-Fu Yu · Inderjit Dhillon · James Demmel · Cho-Jui Hsieh
- 2016 Poster: Accelerating Stochastic Composition Optimization
  Mengdi Wang · Ji Liu · Ethan Fang
- 2016 Poster: A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order
  Xiangru Lian · Huan Zhang · Cho-Jui Hsieh · Yijun Huang · Ji Liu
- 2015 Poster: Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
  Christopher M De Sa · Ce Zhang · Kunle Olukotun · Christopher Ré
- 2015 Poster: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
  Xiangru Lian · Yijun Huang · Yuncheng Li · Ji Liu
- 2015 Spotlight: Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
  Christopher M De Sa · Ce Zhang · Kunle Olukotun · Christopher Ré
- 2015 Spotlight: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
  Xiangru Lian · Yijun Huang · Yuncheng Li · Ji Liu
- 2015 Poster: Taming the Wild: A Unified Analysis of Hogwild-Style Algorithms
  Christopher M De Sa · Ce Zhang · Kunle Olukotun · Christopher Ré
- 2014 Poster: Beyond the Birkhoff Polytope: Convex Relaxations for Vector Permutation Problems
  Cong Han Lim · Stephen Wright
- 2014 Poster: Exclusive Feature Learning on Arbitrary Structures via $\ell_{1,2}$-norm
  Deguang Kong · Ryohei Fujimaki · Ji Liu · Feiping Nie · Chris Ding
- 2014 Poster: Parallel Feature Selection Inspired by Group Testing
  Yingbo Zhou · Utkarsh Porwal · Ce Zhang · Hung Q Ngo · XuanLong Nguyen · Christopher Ré · Venu Govindaraju
- 2012 Workshop: Log-Linear Models
  Dimitri Kanevsky · Tony Jebara · Li Deng · Stephen Wright · Georg Heigold · Avishy Carmi
- 2012 Poster: Factoring nonnegative matrices with linear programs
  Benjamin Recht · Christopher Ré · Joel A Tropp · Victor Bittorf
- 2012 Poster: Regularized Off-Policy TD-Learning
  Bo Liu · Sridhar Mahadevan · Ji Liu
- 2012 Spotlight: Regularized Off-Policy TD-Learning
  Bo Liu · Sridhar Mahadevan · Ji Liu
- 2012 Spotlight: Factoring nonnegative matrices with linear programs
  Benjamin Recht · Christopher Ré · Joel A Tropp · Victor Bittorf
- 2011 Workshop: Optimization for Machine Learning
  Suvrit Sra · Stephen Wright · Sebastian Nowozin
- 2011 Poster: Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
  Benjamin Recht · Christopher Ré · Stephen Wright · Feng Niu
- 2010 Workshop: Optimization for Machine Learning
  Suvrit Sra · Sebastian Nowozin · Stephen Wright
- 2010 Tutorial: Optimization Algorithms in Machine Learning
  Stephen Wright
- 2010 Poster: Multi-Stage Dantzig Selector
  Ji Liu · Peter Wonka · Jieping Ye
- 2009 Workshop: Optimization for Machine Learning
  Sebastian Nowozin · Suvrit Sra · S.V.N. Vishwanathan · Stephen Wright