Oral in Workshop: Federated Learning: Recent Advances and New Challenges

SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication

Marco Bornstein · Tahseen Rabbani · Evan Wang · Amrit Bedi · Furong Huang


Abstract: The decentralized Federated Learning (FL) setting avoids the role of a potentially unreliable or untrustworthy central host by utilizing groups of clients to collaboratively train a model via localized training and model/gradient sharing. Most existing decentralized FL algorithms require synchronization of client models, where the speed of synchronization depends upon the slowest client. In this work, we propose SWIFT: a wait-free decentralized FL algorithm that allows clients to conduct training at their own speed. Theoretically, we prove that SWIFT matches the gold-standard convergence rate $\mathcal{O}(1/\sqrt{T})$ of parallel stochastic gradient descent for convex and non-convex smooth optimization (total iterations $T$). Furthermore, this holds in both the IID and non-IID settings without any bounded-delay assumption for slow clients, which is required by other asynchronous decentralized FL algorithms. Although SWIFT achieves the same convergence rate with respect to $T$ as other state-of-the-art (SOTA) parallel stochastic algorithms, it converges faster with respect to wall-clock time due to its wait-free structure. Our experimental results demonstrate that communication costs between clients in SWIFT fall by an order of magnitude compared to synchronous counterparts. Furthermore, SWIFT reaches target loss levels for image classification, in both IID and non-IID data settings, upwards of 50\% faster than existing SOTA algorithms.
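
The abstract describes wait-free decentralized training: each client runs local updates at its own pace and mixes with whatever neighbor models are currently available, rather than synchronizing with the slowest client. The following is a minimal, hedged simulation sketch of that idea; the toy quadratic objectives, the ring topology, the uniform mixing weights, and the `mailbox` buffer are illustrative assumptions, not SWIFT's actual algorithm or implementation.

```python
# Hedged sketch: local SGD plus non-blocking neighbor model averaging,
# in the spirit of wait-free decentralized FL. All specifics (topology,
# mixing weights, toy losses) are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS, STEPS, LR = 5, 4, 200, 0.1

# Each client i minimizes a simple quadratic f_i(x) = 0.5 * ||x - c_i||^2
# (a stand-in for local training data); the global optimum is the mean of c_i.
centers = rng.normal(size=(N_CLIENTS, DIM))

# "Mailbox": the most recent model each client has published. Readers never
# wait for a fresher copy; they use whatever is available right now, which is
# the essence of wait-free communication.
mailbox = [centers[i].copy() for i in range(N_CLIENTS)]
models = [centers[i].copy() for i in range(N_CLIENTS)]

# Assumed ring topology: client i mixes with clients i-1 and i+1.
neighbors = {i: [(i - 1) % N_CLIENTS, (i + 1) % N_CLIENTS]
             for i in range(N_CLIENTS)}

for t in range(STEPS):
    # Clients progress at different speeds: client 0 is "slow" and skips
    # most rounds, but nobody waits for it.
    active = [i for i in range(N_CLIENTS) if not (i == 0 and t % 3)]
    for i in active:
        # Local stochastic gradient step on the client's own objective.
        grad = models[i] - centers[i] + 0.01 * rng.normal(size=DIM)
        models[i] -= LR * grad
        # Non-blocking averaging with possibly stale neighbor models.
        stale = [mailbox[j] for j in neighbors[i]]
        models[i] = np.mean([models[i]] + stale, axis=0)
        # Publish the updated model without waiting for anyone to read it.
        mailbox[i] = models[i].copy()

consensus = np.mean(models, axis=0)
print("distance to optimum:", np.linalg.norm(consensus - centers.mean(axis=0)))
```

In this toy setup the fast clients keep making progress while the slow client's stale model is simply averaged in whenever it is available, which mirrors the abstract's point that removing synchronization barriers speeds up wall-clock convergence without changing the per-iteration rate.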
