We consider a new class of huge-scale problems: problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations, whose total cost depends only logarithmically on the dimension. This technique is based on a recursive update of the results of matrix/vector products and of the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions.
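The flavor of the recursive update can be conveyed by a minimal sketch (this is an illustration, not the paper's implementation). When a single coordinate of x changes, the product y = Ax is refreshed using only the corresponding sparse column, and the value of a symmetric function such as max_i y_i is maintained in a segment tree at logarithmic cost per touched entry. The names MaxSegmentTree and coordinate_step are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random

class MaxSegmentTree:
    """Maintains max_i y_i under point updates of y in O(log m) time."""
    def __init__(self, y):
        self.m = len(y)
        self.tree = [float("-inf")] * (2 * self.m)
        self.tree[self.m:] = list(y)                 # leaves hold y
        for i in range(self.m - 1, 0, -1):           # build internal nodes bottom-up
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, value):
        i += self.m
        self.tree[i] = value
        i //= 2
        while i >= 1:                                # refresh ancestors of the leaf
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def max(self):
        return self.tree[1]

    def argmax(self):
        i = 1
        while i < self.m:                            # descend toward the larger child
            i = 2 * i if self.tree[2 * i] >= self.tree[2 * i + 1] else 2 * i + 1
        return i - self.m

def coordinate_step(A_csc, x, y, tree, j, delta):
    """Apply x[j] += delta and refresh only the entries of y = A @ x that change."""
    x[j] += delta
    for idx in range(A_csc.indptr[j], A_csc.indptr[j + 1]):
        i = A_csc.indices[idx]                       # row touched by column j
        y[i] += delta * A_csc.data[idx]
        tree.update(i, y[i])

# Tiny check of the invariant y == A @ x and of the maintained maximum.
m, n = 8, 5
A = sparse_random(m, n, density=0.3, format="csc", random_state=0)
x, y = np.zeros(n), np.zeros(m)
tree = MaxSegmentTree(y)
coordinate_step(A, x, y, tree, j=2, delta=0.5)
assert np.isclose(tree.max(), (A @ x).max())
```

Each coordinate change costs O(nnz(column) · log m), so an iteration that touches few coordinates avoids recomputing the full product.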
We show that the updating technique can be efficiently coupled only with the simplest subgradient methods. Similar results can be obtained for a new nonsmooth random variant of a coordinate descent scheme. We also present promising results from preliminary computational experiments.
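For concreteness, here is a hedged sketch of how a plain subgradient step can couple with the update above for a max-type objective f(x) = max_i (Ax)_i: the subgradient at the current point is a sparse row of A, so each iteration touches only a few coordinates of x, and each touched coordinate triggers one cheap column update. The constant step size h and the helper names reused from the previous sketch are assumptions, not taken from the paper.

```python
def sparse_subgradient_method(A_csc, A_csr, x, y, tree, h, iters):
    """Plain subgradient steps x <- x - h * g, where g is the row of A
    attaining the max; reuses coordinate_step / MaxSegmentTree from above."""
    for _ in range(iters):
        i_star = tree.argmax()                       # active row of f(x) = max_i (A x)_i
        for idx in range(A_csr.indptr[i_star], A_csr.indptr[i_star + 1]):
            j = A_csr.indices[idx]                   # nonzero coordinate of the subgradient
            coordinate_step(A_csc, x, y, tree, j, -h * A_csr.data[idx])
    return x

# Usage with the objects defined above; A is converted once to row format.
x = sparse_subgradient_method(A, A.tocsr(), x, y, tree, h=0.1, iters=100)
```

The per-iteration cost is proportional to the number of nonzeros in the active row times a logarithmic factor, which is what makes the simplest subgradient schemes compatible with the updating technique.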
Author Information
Yurii Nesterov (Catholic University of Louvain (UCL))
Yurii Nesterov is a professor at the Center for Operations Research and Econometrics (CORE) at the Catholic University of Louvain (UCL), Belgium. He received his Ph.D. in Applied Mathematics in 1984 from the Institute of Control Sciences, Moscow, and has worked at CORE since 1993. His research interests concern complexity issues and efficient methods for solving various optimization problems. His main results are in convex optimization (optimal methods for smooth problems, polynomial-time interior-point methods, the smoothing technique for structural optimization, cubic regularization of Newton's method, and optimization methods for huge-scale problems). He is the author of four monographs and more than 80 refereed papers in the leading optimization journals. He received the Dantzig Prize from SIAM and the Mathematical Programming Society in 2000, the von Neumann Theory Prize from INFORMS in 2009, and the SIAM Outstanding Paper Award in 2014.
More from the Same Authors
- 2020 Poster: Convex optimization based on global lower second-order models
  Nikita Doikov · Yurii Nesterov
- 2020 Oral: Convex optimization based on global lower second-order models
  Nikita Doikov · Yurii Nesterov
- 2016 Poster: Learning Supervised PageRank with Gradient-Based and Gradient-Free Optimization Methods
  Lev Bogolubsky · Pavel Dvurechenskii · Alexander Gasnikov · Gleb Gusev · Yurii Nesterov · Andrei M Raigorodskii · Aleksey Tikhonov · Maksim Zhukovskii