Author Information
Laurent Condat (KAUST)
Tiffany Vlaar (University of Edinburgh)
Ohad Shamir (Weizmann Institute of Science)
Mohammadi Zaki (Indian Institute of Science Bangalore)
Zhize Li (King Abdullah University of Science and Technology (KAUST))
Zhize Li has been a Research Scientist at the King Abdullah University of Science and Technology (KAUST) since September 2020. He obtained his PhD in Computer Science from Tsinghua University in 2019, advised by Prof. Jian Li. He was a postdoc at KAUST (hosted by Prof. Peter Richtárik), a visiting scholar at Duke University (hosted by Prof. Rong Ge), and a visiting scholar at the Georgia Institute of Technology (hosted by Prof. Guanghui (George) Lan).
Guan-Horng Liu (Georgia Institute of Technology)
Samuel Horváth (King Abdullah University of Science and Technology)
Mher Safaryan (KAUST)
Yoni Choukroun (Toga networks)
Kumar Shridhar (TU Kaiserslautern)
Nabil Kahale (ESCP Business School)
Nabil Kahalé is an associate professor at ESCP Business School in Paris. He graduated from École Polytechnique with a B.S. in Engineering in 1987 and received his Ph.D. in theoretical computer science from MIT in 1993. His current research and teaching address risk management, the pricing of derivative securities, Monte Carlo simulation, and machine learning.
Jikai Jin (Peking University)
Pratik Kumar Jawanpuria (Microsoft)
Gaurav Kumar Yadav (Indian Institute of Technology, Madras)
Gaurav Kumar is a research scholar in the Department of Mechanical Engineering at IIT Madras. He is currently working on applications of machine learning to fluid-flow and heat-transfer problems, under the guidance of Dr. Balaji Srinivasan.
Kazuki Koyama (NTT Communications Corp.)
Junyoung Kim (Department of Industrial Engineering, Seoul National University)
Xiao Li (The Chinese University of Hong Kong, Shenzhen)
Saugata Purkayastha (Assam Don Bosco University)
I completed my Ph.D. in Mathematics at Gauhati University in 2015. I am presently an Assistant Professor in the Department of Mathematics, Assam Don Bosco University, Assam, India. My main research interests are algebraic structures and optimization theory in machine learning.
Adil Salim (KAUST)
Dighanchal Banerjee (Tata Consultancy Services)
Peter Richtarik (KAUST)
Lakshman Mahto (Indian Institute of Information Technology Dharwad)
I have been an Assistant Professor of Mathematics at the Indian Institute of Information Technology Dharwad since August 2016. My research lies at the interface of optimization, statistical learning, and control, focused on understanding and developing efficient optimization algorithms for machine intelligence, system dynamics, and control. This often requires conceptual and technical breakthroughs (for scientific inference, and for transforming large data sets into useful information and better decisions under uncertainty) along three dimensions: 1. optimization of convex and non-convex problems; 2. scalable algorithms that leverage statistical models to improve decision making and learning; 3. data-driven control and optimization of dynamical processes.
Tian Ye (Tsinghua University)
Bamdev Mishra (Microsoft)
Huikang Liu (Imperial College London)
Jiajie Zhu (Max Planck Institute for Intelligent Systems)
More from the Same Authors
2020 : Optimal Client Sampling for Federated Learning »
Samuel Horváth -
2021 Spotlight: Second-Order Neural ODE Optimizer »
Guan-Horng Liu · Tianrong Chen · Evangelos Theodorou -
2021 Spotlight: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems »
Itay Safran · Ohad Shamir -
2021 Spotlight: FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout »
Samuel Horváth · Stefanos Laskaridis · Mario Almeida · Ilias Leontiadis · Stylianos Venieris · Nicholas Lane -
2021 : Better Linear Rates for SGD with Data Shuffling »
Grigory Malinovsky · Alibek Sailanbayev · Peter Richtarik -
2021 : DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization »
Boyue Li · Zhize Li · Yuejie Chi -
2021 : Shifted Compression Framework: Generalizations and Improvements »
Egor Shulgin · Peter Richtarik -
2021 : ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method »
Zhize Li -
2021 : EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback »
Peter Richtarik · Igor Sokolov · Ilyas Fatkhullin · Eduard Gorbunov · Zhize Li -
2021 : On Server-Side Stepsizes in Federated Optimization: Theory Explaining the Heuristics »
Grigory Malinovsky · Konstantin Mishchenko · Peter Richtarik -
2021 : FedMix: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning »
Elnur Gasanov · Ahmed Khaled Ragab Bayoumi · Samuel Horváth · Peter Richtarik -
2021 : Likelihood Training of Schrödinger Bridges using Forward-Backward SDEs Theory »
Tianrong Chen · Guan-Horng Liu · Evangelos Theodorou -
2021 : Q&A with Professor Peter Richtarik »
Peter Richtarik -
2021 : Keynote Talk: Permutation Compressors for Provably Faster Distributed Nonconvex Optimization (Peter Richtarik) »
Peter Richtarik -
2021 : Poster Session 1 (gather.town) »
Hamed Jalali · Robert Hönig · Maximus Mutschler · Manuel Madeira · Abdurakhmon Sadiev · Egor Shulgin · Alasdair Paren · Pascal Esser · Simon Roburin · Julius Kunze · Agnieszka Słowik · Frederik Benzing · Futong Liu · Hongyi Li · Ryotaro Mitsuboshi · Grigory Malinovsky · Jayadev Naram · Zhize Li · Igor Sokolov · Sharan Vaswani -
2021 Poster: Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization »
Mher Safaryan · Filip Hanzely · Peter Richtarik -
2021 Poster: FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout »
Samuel Horváth · Stefanos Laskaridis · Mario Almeida · Ilias Leontiadis · Stylianos Venieris · Nicholas Lane -
2021 Poster: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback »
Peter Richtarik · Igor Sokolov · Ilyas Fatkhullin -
2021 Poster: Error Compensated Distributed SGD Can Be Accelerated »
Xun Qian · Peter Richtarik · Tong Zhang -
2021 Poster: Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis »
Jikai Jin · Bohang Zhang · Haiyang Wang · Liwei Wang -
2021 Poster: Learning a Single Neuron with Bias Using Gradient Descent »
Gal Vardi · Gilad Yehudai · Ohad Shamir -
2021 Poster: CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression »
Zhize Li · Peter Richtarik -
2021 Poster: Oracle Complexity in Nonsmooth Nonconvex Optimization »
Guy Kornowski · Ohad Shamir -
2021 Poster: A Stochastic Newton Algorithm for Distributed Convex Optimization »
Brian Bullins · Kshitij Patel · Ohad Shamir · Nathan Srebro · Blake Woodworth -
2021 Poster: Second-Order Neural ODE Optimizer »
Guan-Horng Liu · Tianrong Chen · Evangelos Theodorou -
2021 Oral: Oracle Complexity in Nonsmooth Nonconvex Optimization »
Guy Kornowski · Ohad Shamir -
2021 Poster: Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks »
Dmitry Kovalev · Elnur Gasanov · Alexander Gasnikov · Peter Richtarik -
2021 Poster: On Riemannian Optimization over Positive Definite Matrices with the Bures-Wasserstein Geometry »
Andi Han · Bamdev Mishra · Pratik Kumar Jawanpuria · Junbin Gao -
2021 Poster: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems »
Itay Safran · Ohad Shamir -
2021 Oral: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback »
Peter Richtarik · Igor Sokolov · Ilyas Fatkhullin -
2020 : Contributed talks in Session 2 (Zoom) »
Martin Takac · Samuel Horváth · Guan-Horng Liu · Nicolas Loizou · Sharan Vaswani -
2020 : Contributed Video: Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization, Samuel Horvath »
Samuel Horváth -
2020 : Contributed Video: DDPNOpt: Differential Dynamic Programming Neural Optimizer, Guan-Horng Liu »
Guan-Horng Liu -
2020 : Contributed talks in Session 1 (Zoom) »
Sebastian Stich · Laurent Condat · Zhize Li · Ohad Shamir · Tiffany Vlaar · Mohammadi Zaki -
2020 : Contributed Video: Constraint-Based Regularization of Neural Networks, Tiffany Vlaar »
Tiffany Vlaar -
2020 : Contributed Video: Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?, Ohad Shamir »
Ohad Shamir -
2020 : Contributed Video: Employing No Regret Learners for Pure Exploration in Linear Bandits, Mohammadi Zaki »
Mohammadi Zaki -
2020 : Contributed Video: Distributed Proximal Splitting Algorithms with Rates and Acceleration, Laurent Condat »
Laurent Condat -
2020 : Contributed Video: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization, Zhize Li »
Zhize Li -
2020 Poster: Improved Analysis of Clipping Algorithms for Non-convex Optimization »
Bohang Zhang · Jikai Jin · Cong Fang · Liwei Wang -
2020 Poster: A Non-Asymptotic Analysis for Stein Variational Gradient Descent »
Anna Korba · Adil Salim · Michael Arbel · Giulia Luise · Arthur Gretton -
2020 Poster: Neural Networks with Small Weights and Depth-Separation Barriers »
Gal Vardi · Ohad Shamir -
2020 Poster: Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm »
Adil Salim · Peter Richtarik -
2020 Poster: The Wasserstein Proximal Gradient Algorithm »
Adil Salim · Anna Korba · Giulia Luise -
2020 Poster: Linearly Converging Error Compensated SGD »
Eduard Gorbunov · Dmitry Kovalev · Dmitry Makarenko · Peter Richtarik -
2020 Poster: Statistical Optimal Transport posed as Learning Kernel Embedding »
Saketha Nath Jagarlapudi · Pratik Kumar Jawanpuria -
2020 Poster: Random Reshuffling: Simple Analysis with Vast Improvements »
Konstantin Mishchenko · Ahmed Khaled Ragab Bayoumi · Peter Richtarik -
2020 Spotlight: Linearly Converging Error Compensated SGD »
Eduard Gorbunov · Dmitry Kovalev · Dmitry Makarenko · Peter Richtarik -
2020 Session: Orals & Spotlights Track 21: Optimization »
Peter Richtarik · Marco Cuturi -
2020 Poster: Lower Bounds and Optimal Algorithms for Personalized Federated Learning »
Filip Hanzely · Slavomír Hanzely · Samuel Horváth · Peter Richtarik -
2020 Poster: Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization »
Dmitry Kovalev · Adil Salim · Peter Richtarik -
2019 Poster: Control What You Can: Intrinsically Motivated Task-Planning Agent »
Sebastian Blaes · Marin Vlastelica Pogančić · Jiajie Zhu · Georg Martius -
2019 Poster: A unified variance-reduced accelerated gradient method for convex optimization »
Guanghui Lan · Zhize Li · Yi Zhou -
2019 Poster: RSN: Randomized Subspace Newton »
Robert Gower · Dmitry Kovalev · Felix Lieder · Peter Richtarik -
2019 Poster: SSRGD: Simple Stochastic Recursive Gradient Descent for Escaping Saddle Points »
Zhize Li -
2019 Poster: Maximum Mean Discrepancy Gradient Flow »
Michael Arbel · Anna Korba · Adil Salim · Arthur Gretton -
2019 Poster: On the Power and Limitations of Random Features for Understanding Neural Networks »
Gilad Yehudai · Ohad Shamir -
2019 Poster: Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates »
Adil Salim · Dmitry Kovalev · Peter Richtarik -
2019 Spotlight: Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates »
Adil Salim · Dmitry Kovalev · Peter Richtarik -
2018 Poster: Stochastic Spectral and Conjugate Descent Methods »
Dmitry Kovalev · Peter Richtarik · Eduard Gorbunov · Elnur Gasanov -
2018 Poster: Are ResNets Provably Better than Linear Predictors? »
Ohad Shamir -
2018 Poster: Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization »
Robert Gower · Filip Hanzely · Peter Richtarik · Sebastian Stich -
2018 Poster: SEGA: Variance Reduction via Gradient Sketching »
Filip Hanzely · Konstantin Mishchenko · Peter Richtarik -
2018 Poster: Inexact trust-region algorithms on Riemannian manifolds »
Hiroyuki Kasai · Bamdev Mishra -
2018 Poster: Global Non-convex Optimization with Discretized Diffusions »
Murat Erdogdu · Lester Mackey · Ohad Shamir -
2018 Poster: A Dual Framework for Low-rank Tensor Completion »
Madhav Nimishakavi · Pratik Kumar Jawanpuria · Bamdev Mishra -
2016 Poster: Dimension-Free Iteration Complexity of Finite Sum Optimization Problems »
Yossi Arjevani · Ohad Shamir -
2016 Poster: Without-Replacement Sampling for Stochastic Gradient Methods »
Ohad Shamir -
2016 Oral: Without-Replacement Sampling for Stochastic Gradient Methods »
Ohad Shamir -
2015 Poster: Quartz: Randomized Dual Coordinate Ascent with Arbitrary Sampling »
Zheng Qu · Peter Richtarik · Tong Zhang -
2015 Poster: Efficient Output Kernel Learning for Multiple Tasks »
Pratik Kumar Jawanpuria · Maksim Lapin · Matthias Hein · Bernt Schiele -
2015 Poster: Communication Complexity of Distributed Convex Learning and Optimization »
Yossi Arjevani · Ohad Shamir -
2014 Poster: Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation »
Ohad Shamir -
2014 Poster: On the Computational Efficiency of Training Neural Networks »
Roi Livni · Shai Shalev-Shwartz · Ohad Shamir -
2013 Poster: Online Learning with Switching Costs and Other Adaptive Adversaries »
Nicolò Cesa-Bianchi · Ofer Dekel · Ohad Shamir