Poster Session 1 (gather.town)
Hamed Jalali · Robert Hönig · Maximus Mutschler · Manuel Madeira · Abdurakhmon Sadiev · Egor Shulgin · Alasdair Paren · Pascal Esser · Simon Roburin · Julius Kunze · Agnieszka Słowik · Frederik Benzing · Futong Liu · Hongyi Li · Ryotaro Mitsuboshi · Grigory Malinovsky · Jayadev Naram · Zhize Li · Igor Sokolov · Sharan Vaswani

Mon Dec 13 05:30 AM -- 06:30 AM (PST)
Event URL: https://eventhosts.gather.town/iYbJRqT1DjawFX4u/poster-session-1

Please join us in gather.town (see link above). The abstracts of the posters presented in this session appear below the schedule.

Authors/papers presenting posters in gather.town for this session:

  • Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes, Hamed Jalali

  • DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning, Robert Hönig

  • Using a one dimensional parabolic model of the full-batch loss to estimate learning rates during training, Maximus Mutschler

  • COCO Denoiser: Using Co-Coercivity for Variance Reduction in Stochastic Convex Optimization, Manuel Madeira

  • Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm for All Personalization Modes, Abdurakhmon Sadiev

  • Shifted Compression Framework: Generalizations and Improvements, Egor Shulgin

  • Faking Interpolation Until You Make It, Alasdair J Paren

  • Towards Modeling and Resolving Singular Parameter Spaces using Stratifolds, Pascal M Esser

  • Spherical Perspective on Learning with Normalization Layers, Simon W Roburin

  • Adaptive Optimization with Examplewise Gradients, Julius Kunze

  • On the Relation between Distributionally Robust Optimization and Data Curation, Agnieszka Słowik

  • Fast, Exact Subsampled Natural Gradients and First-Order KFAC, Frederik Benzing

  • Understanding Memorization from the Perspective of Optimization via Efficient Influence Estimation, Futong Liu

  • Community-based Layerwise Distributed Training of Graph Convolutional Networks, Hongyi Li

  • A New Scheme for Boosting with an Average Margin Distribution Oracle, Ryotaro Mitsuboshi

  • Better Linear Rates for SGD with Data Shuffling, Grigory Malinovsky

  • Structured Low-Rank Tensor Learning, Jayadev Naram

  • ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method, Zhize Li

  • EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback, Igor Sokolov

  • On Server-Side Stepsizes in Federated Optimization: Theory Explaining the Heuristics, Grigory Malinovsky

Author Information

Hamed Jalali (University of Tuebingen)
Robert Hönig (ETH Zürich)
Maximus Mutschler (University of Tübingen)
Manuel Madeira (Instituto Superior Técnico)
Abdurakhmon Sadiev (Moscow Institute of Physics and Technology)
Egor Shulgin (Samsung AI Cambridge, King Abdullah University of Science and Technology)

I am a PhD student in Computer Science at King Abdullah University of Science and Technology (KAUST), advised by [Peter Richtárik](https://richtarik.org/). Before that, I obtained a BSc in Applied Mathematics, Computer Science, and Physics from the Moscow Institute of Physics and Technology in 2019.

Alasdair Paren (University of Oxford)
Pascal Esser (Technical University of Munich)
Simon Roburin (ENPC; valeo.ai)
Julius Kunze (University College London)
Agnieszka Słowik (Department of Computer Science and Technology University of Cambridge)
Frederik Benzing (ETH Zurich)
Futong Liu (EPFL)
Hongyi Li (Xidian University)
Ryotaro Mitsuboshi (Kyushu University)
Grigory Malinovsky (King Abdullah University of Science and Technology)
Jayadev Naram (International Institute of Information Technology, Hyderabad)
Zhize Li (King Abdullah University of Science and Technology (KAUST))
Igor Sokolov (KAUST)
Sharan Vaswani (University of Alberta)
