
Poster session
Workshop: OPT 2021: Optimization for Machine Learning

Poster Session 1

Hamed Jalali · Robert Hönig · Maximus Mutschler · Manuel Madeira · Abdurakhmon Sadiev · Egor Shulgin · Alasdair Paren · Pascal Esser · Simon Roburin · Julius Kunze · Agnieszka Słowik · Frederik Benzing · Futong Liu · Hongyi Li · Ryotaro Mitsuboshi · Grigory Malinovsky · Jayadev Naram · Zhize Li · Igor Sokolov · Sharan Vaswani


Please join us (see link above). For the abstracts of the posters presented in this session, see below the schedule.

Authors/papers presenting posters in this session:

  • Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes, Hamed Jalali

  • DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning, Robert Hönig

  • Using a one dimensional parabolic model of the full-batch loss to estimate learning rates during training, Maximus Mutschler

  • COCO Denoiser: Using Co-Coercivity for Variance Reduction in Stochastic Convex Optimization, Manuel Madeira

  • Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm for All Personalization Modes, Abdurakhmon Sadiev

  • Shifted Compression Framework: Generalizations and Improvements, Egor Shulgin

  • Faking Interpolation Until You Make It, Alasdair J Paren

  • Towards Modeling and Resolving Singular Parameter Spaces using Stratifolds, Pascal M Esser

  • Spherical Perspective on Learning with Normalization Layers, Simon W Roburin

  • Adaptive Optimization with Examplewise Gradients, Julius Kunze

  • On the Relation between Distributionally Robust Optimization and Data Curation, Agnieszka Słowik

  • Fast, Exact Subsampled Natural Gradients and First-Order KFAC, Frederik Benzing

  • Understanding Memorization from the Perspective of Optimization via Efficient Influence Estimation, Futong Liu

  • Community-based Layerwise Distributed Training of Graph Convolutional Networks, Hongyi Li

  • A New Scheme for Boosting with an Average Margin Distribution Oracle, Ryotaro Mitsuboshi

  • Better Linear Rates for SGD with Data Shuffling, Grigory Malinovsky

  • Structured Low-Rank Tensor Learning, Jayadev Naram

  • ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method, Zhize Li

  • EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback, Igor Sokolov

  • On Server-Side Stepsizes in Federated Optimization: Theory Explaining the Heuristics, Grigory Malinovsky