

Poster

Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization

Ilya Kuruzov · Gesualdo Scutari · Alexander Gasnikov

West Ballroom A-D #6105
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

This paper addresses the minimization of the sum of strongly convex, smooth functions over a network of agents without a centralized server. Existing decentralized algorithms require knowledge of function and network parameters, such as the Lipschitz constant of the global gradient and/or network connectivity, for hyperparameter tuning. Agents usually cannot access this information, leading to conservative selections and slow convergence or divergence. This paper introduces a decentralized algorithm that eliminates the need for specific parameter tuning. Our approach employs an operator splitting technique with a novel variable metric, enabling a local backtracking line-search to adaptively select the stepsize without global information or extensive communications. This results in favorable convergence guarantees and dependence on optimization and network parameters compared to existing nonadaptive methods. Notably, our method is the first adaptive decentralized algorithm that achieves linear convergence for strongly convex, smooth objectives. Preliminary numerical experiments support our theoretical findings, demonstrating superior performance in convergence speed and scalability.
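The abstract gives no pseudocode, so to make the core idea concrete, below is a minimal sketch of a generic decentralized scheme in which each agent picks its stepsize by a purely local backtracking (Armijo) line search. This is an illustrative stand-in, not the paper's algorithm (which is built on operator splitting with a variable metric); the helper functions, the mixing matrix W, and all constants here are assumptions for the sketch.

```python
import numpy as np

def armijo_stepsize(f, x, g, alpha0=1.0, beta=0.5, c=1e-4, max_halvings=30):
    """Shrink alpha geometrically until the Armijo sufficient-decrease
    condition f(x - alpha*g) <= f(x) - c*alpha*||g||^2 holds.
    Uses only the agent's own f, so no global constants are needed."""
    alpha = alpha0
    fx = f(x)
    for _ in range(max_halvings):
        if f(x - alpha * g) <= fx - c * alpha * g.dot(g):
            break
        alpha *= beta
    return alpha

def decentralized_step(X, W, fs, grads):
    """One round of a generic gossip-plus-gradient scheme: mix iterates
    with neighbors via the doubly stochastic matrix W, then each agent
    takes a gradient step on its own f_i with a locally chosen stepsize."""
    X_mix = W @ X                      # one round of communication
    X_new = np.empty_like(X)
    for i in range(X.shape[0]):        # each agent uses local data only
        g = grads[i](X_mix[i])
        alpha = armijo_stepsize(fs[i], X_mix[i], g)
        X_new[i] = X_mix[i] - alpha * g
    return X_new

# Toy usage: three agents with strongly convex local quadratics
# f_i(x) = 0.5*||A_i x - b_i||^2 and a doubly stochastic mixing matrix.
rng = np.random.default_rng(0)
A = [rng.standard_normal((5, 3)) + 2 * np.eye(5, 3) for _ in range(3)]
b = [rng.standard_normal(5) for _ in range(3)]
fs = [lambda x, A=A[i], b=b[i]: 0.5 * np.sum((A @ x - b) ** 2) for i in range(3)]
grads = [lambda x, A=A[i], b=b[i]: A.T @ (A @ x - b) for i in range(3)]
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
X = np.zeros((3, 3))                   # one row of parameters per agent
for _ in range(200):
    X = decentralized_step(X, W, fs, grads)
print("disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```

Note that a plain gossip-plus-gradient scheme like this is only guaranteed to reach a neighborhood of the global minimizer; the paper's contribution is precisely an adaptive, parameter-free method that attains exact linear convergence in this setting.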
