

Plenary Speaker in Workshop: OPT 2021: Optimization for Machine Learning

Learning with Strange Gradients

Martin Jaggi


Abstract: Gradient methods form the foundation of current machine learning. A vast literature covers the use of stochastic gradients as simple unbiased estimators of the full gradient of our objective. In this talk, we discuss four applications motivated by practical machine learning where this key assumption is violated, and show new ways to cope with gradients which are only loosely related to the original objective. We demonstrate that algorithms with rigorous convergence guarantees can still be obtained in such settings, for

  1. federated learning on heterogeneous data,

  2. personalized collaborative learning,

  3. masked training of neural networks with partial gradients,

  4. learning with malicious participants, in the sense of Byzantine-robust training.
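
As a minimal illustrative sketch (not part of the talk), the snippet below contrasts the standard assumption with one way it can break: a stochastic gradient from uniformly sampled mini-batches is an unbiased estimate of the full gradient, whereas the "strange" gradient computed by a single client on a skewed, heterogeneous data slice is not. The least-squares objective, the sampling scheme, and all variable names are assumptions made purely for illustration.

```python
# Sketch: unbiased stochastic gradients vs. a biased "client" gradient.
# All quantities here are illustrative assumptions, not from the talk.
import numpy as np

rng = np.random.default_rng(0)

# Full least-squares objective f(w) = (1/n) * sum_i (x_i^T w - y_i)^2.
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=1000)
w = rng.normal(size=5)

def grad(Xb, yb, w):
    """Gradient of the mean squared error on a batch."""
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

full_grad = grad(X, y, w)

# Unbiased: averaging gradients of uniformly sampled mini-batches
# recovers the full gradient (up to sampling noise).
batches = [rng.choice(len(y), size=32, replace=False) for _ in range(2000)]
sgd_mean = np.mean([grad(X[i], y[i], w) for i in batches], axis=0)

# "Strange" gradient: a single client only ever sees a skewed slice of the
# data (here: the 200 examples with the largest y), so its gradient is biased.
client = np.argsort(y)[-200:]
client_grad = grad(X[client], y[client], w)

print("deviation of uniform SGD estimate  :", np.linalg.norm(sgd_mean - full_grad))
print("deviation of heterogeneous client  :", np.linalg.norm(client_grad - full_grad))
```

Running the sketch shows the mini-batch average landing close to the full gradient while the single-client gradient deviates substantially, which is the kind of mismatch the four settings above have to cope with.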