The coordinate descent (CD) method is a classical optimization algorithm that has seen a revival of interest because of its competitive performance in machine learning applications. A number of recent papers have provided convergence rate estimates for its deterministic (cyclic) and randomized variants, which differ in how the update coordinates are selected. These estimates suggest that randomized coordinate descent (RCD) performs better than cyclic coordinate descent (CCD), although numerical experiments do not clearly support this comparison. In this paper, we provide examples, and more generally problem classes, for which CCD (or CD with any deterministic order) is faster than RCD in terms of asymptotic worst-case convergence. Furthermore, we provide lower and upper bounds on the improvement in the convergence rate of CCD relative to RCD, which depends on the deterministic order used. We also characterize the best deterministic order (the one leading to the maximum improvement in convergence rate) in terms of combinatorial properties of the Hessian matrix of the objective function.
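As a concrete illustration of the two variants compared above, the short sketch below runs coordinate descent with exact coordinate minimization on a synthetic strongly convex quadratic, once with a fixed cyclic order (CCD) and once with i.i.d. uniformly random coordinates (RCD). The problem data, dimensions, and the helper function cd are illustrative assumptions, not the paper's experimental setup, and a single random instance is not evidence for either method.

# Hedged sketch (illustrative, not from the paper): cyclic vs. randomized
# coordinate descent on f(x) = 0.5 * x' A x - b' x with exact coordinate steps.
import numpy as np

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite Hessian (assumed data)
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)       # exact minimizer, used only to measure error

def cd(order_fn, n_epochs=50):
    """Run CD; order_fn(epoch) returns the coordinates to update in that epoch."""
    x = np.zeros(n)
    errors = []
    for epoch in range(n_epochs):
        for i in order_fn(epoch):
            # exact minimization of f over coordinate i, holding the others fixed
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
        errors.append(np.linalg.norm(x - x_star))
    return errors

ccd_err = cd(lambda _: range(n))                    # CCD: fixed order 0, 1, ..., n-1
rcd_err = cd(lambda _: rng.integers(0, n, size=n))  # RCD: n i.i.d. uniform coordinates

print("final error  CCD: %.3e   RCD: %.3e" % (ccd_err[-1], rcd_err[-1]))

The order_fn hook makes it easy to plug in any deterministic order, which is the quantity the paper's rate comparison depends on.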
Author Information
Mert Gurbuzbalaban (Rutgers University)
Asuman Ozdaglar (Massachusetts Institute of Technology)
Asu Ozdaglar received the B.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1996, and the S.M. and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1998 and 2003, respectively. She is currently a professor in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology. She is also the director of the Laboratory for Information and Decision Systems. Her research expertise includes optimization theory, with emphasis on nonlinear programming and convex analysis; game theory, with applications in communication, social, and economic networks; distributed optimization and control; and network analysis, with special emphasis on contagious processes, systemic risk, and dynamic control. Professor Ozdaglar is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching Award, the NSF CAREER Award, the 2008 Donald P. Eckman Award of the American Automatic Control Council, the Class of 1943 Career Development Chair, the inaugural Steven and Renee Innovation Fellowship, and the 2014 Spira Teaching Award. She served on the Board of Governors of the Control Systems Society in 2010 and was an associate editor for IEEE Transactions on Automatic Control. She is currently the area co-editor for a new area of the journal Operations Research, entitled "Games, Information and Networks." She is the co-author of the book entitled "Convex Analysis and Optimization" (Athena Scientific, 2003).
Pablo A Parrilo (Massachusetts Institute of Technology)
Nuri Vanli (Massachusetts Institute of Technology)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Spotlight: When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent »
  Tue. Dec 5th, 07:55 -- 08:00 PM, Room: Hall C
More from the Same Authors
- 2021 Spotlight: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms »
  Alexander Camuto · George Deligiannidis · Murat Erdogdu · Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu
- 2022 : Smoothed-SGDmax: A Stability-Inspired Algorithm to Improve Adversarial Generalization »
  Jiancong Xiao · Jiawei Zhang · Zhiquan Luo · Asuman Ozdaglar
- 2022 Poster: What is a Good Metric to Study Generalization of Minimax Learners? »
  Asuman Ozdaglar · Sarath Pattathil · Jiawei Zhang · Kaiqing Zhang
- 2022 Poster: Bridging Central and Local Differential Privacy in Data Acquisition Mechanisms »
  Alireza Fallah · Ali Makhdoumi · Azarakhsh Malekian · Asuman Ozdaglar
- 2021 : Q&A with Professor Asu Ozdaglar »
  Asuman Ozdaglar
- 2021 : Keynote Talk: Personalization in Federated Learning: Adaptation and Clustering (Asu Ozdaglar) »
  Asuman Ozdaglar
- 2021 Poster: Decentralized Q-learning in Zero-sum Markov Games »
  Muhammed Sayin · Kaiqing Zhang · David Leslie · Tamer Basar · Asuman Ozdaglar
- 2021 Poster: Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance »
  Hongjian Wang · Mert Gurbuzbalaban · Lingjiong Zhu · Umut Simsekli · Murat Erdogdu
- 2021 Poster: Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks »
  Alireza Fallah · Aryan Mokhtari · Asuman Ozdaglar
- 2021 Poster: On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning »
  Alireza Fallah · Kristian Georgiev · Aryan Mokhtari · Asuman Ozdaglar
- 2021 Poster: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms »
  Alexander Camuto · George Deligiannidis · Murat Erdogdu · Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu
- 2020 Poster: Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach »
  Alireza Fallah · Aryan Mokhtari · Asuman Ozdaglar
- 2019 Poster: A Universally Optimal Multistage Accelerated Stochastic Gradient Method »
  Necdet Serhat Aybat · Alireza Fallah · Mert Gurbuzbalaban · Asuman Ozdaglar
- 2018 Poster: Escaping Saddle Points in Constrained Optimization »
  Aryan Mokhtari · Asuman Ozdaglar · Ali Jadbabaie
- 2018 Spotlight: Escaping Saddle Points in Constrained Optimization »
  Aryan Mokhtari · Asuman Ozdaglar · Ali Jadbabaie
- 2015 Invited Talk: Incremental Methods for Additive Cost Convex Optimization »
  Asuman Ozdaglar
- 2013 Poster: Computing the Stationary Distribution Locally »
  Christina Lee · Asuman Ozdaglar · Devavrat Shah