Learning with Strange Gradients
Martin Jaggi

Mon Dec 13 04:30 AM -- 04:55 AM (PST)

Abstract: Gradient methods form the foundation of current machine learning. A vast literature covers the use of stochastic gradients as simple, unbiased estimators of the full gradient of the objective. In this talk, we discuss four applications, motivated by practical machine learning, in which this key assumption is violated, and show new ways to cope with gradients that are only loosely related to the original objective. We demonstrate that algorithms with rigorous convergence guarantees can still be obtained in such settings, for

  1. federated learning on heterogeneous data,

  2. personalized collaborative learning,

  3. masked training of neural networks with partial gradients,

  4. learning with malicious participants, in the sense of Byzantine-robust training.
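As a simple illustration of the unbiasedness assumption and of how it can break, the sketch below (not from the talk; the least-squares objective, the masking scheme, and all names are illustrative assumptions) compares a mini-batch gradient, which is an unbiased estimator of the full gradient, with a coordinate-masked partial gradient in the spirit of item 3, whose expectation is a scaled-down, biased version of the full gradient.

```python
# Minimal sketch (illustrative, not from the talk): unbiased mini-batch
# gradients vs. a biased masked/partial gradient on a toy least-squares
# objective f(w) = 1/(2n) * ||X w - y||^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = rng.normal(size=d)

def full_gradient(w):
    # Exact gradient of the least-squares objective.
    return X.T @ (X @ w - y) / n

def minibatch_gradient(w, batch_size=32):
    # Gradient on a random mini-batch: an unbiased estimator of full_gradient(w).
    idx = rng.choice(n, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size

def masked_gradient(w, keep_prob=0.5):
    # Keep a random subset of coordinates without rescaling: the expectation
    # is keep_prob * full_gradient(w), i.e. a biased estimator.
    mask = rng.random(d) < keep_prob
    return full_gradient(w) * mask

g_full = full_gradient(w)
g_mini = np.mean([minibatch_gradient(w) for _ in range(5000)], axis=0)
g_mask = np.mean([masked_gradient(w) for _ in range(5000)], axis=0)

print("||avg minibatch grad - full grad||:", np.linalg.norm(g_mini - g_full))  # small
print("||avg masked grad    - full grad||:", np.linalg.norm(g_mask - g_full))  # about 0.5 * ||full grad||
```

For this toy mask, dividing by keep_prob would restore unbiasedness; the sketch only shows how easily the standard unbiasedness assumption can fail once gradients are masked, personalized, aggregated across heterogeneous clients, or corrupted.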

Author Information

Martin Jaggi (EPFL)
