Poster in Workshop: Federated Learning: Recent Advances and New Challenges

Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation

M. Taha Toghani · Cesar Uribe


Abstract:

Synchronous updates may compromise the efficiency of cross-device federated learning once the number of active clients increases. The FedBuff algorithm (Nguyen et al.) alleviates this problem by allowing asynchronous (stale) updates, which improves the scalability of training while preserving privacy via secure aggregation. We revisit the FedBuff algorithm for asynchronous federated learning and extend the existing analysis by removing the boundedness assumption on the gradient norm. This paper presents a theoretical analysis of the convergence rate of the algorithm, taking into account data heterogeneity, batch size, and delay.
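
As a rough illustration of the buffered asynchronous aggregation scheme described above, the sketch below simulates a FedBuff-style server that accumulates asynchronously arriving client updates in a buffer and applies an averaged update only once the buffer is full, with staleness modeled by letting clients start from older model snapshots. This is a minimal toy sketch, not the authors' or Nguyen et al.'s implementation: the function names (`fedbuff_style_training`, `local_sgd`), the least-squares objective, the synthetic heterogeneous clients, and the snapshot-based staleness model are all illustrative assumptions.

```python
import numpy as np

def local_sgd(model, data, targets, lr=0.01, steps=5, batch_size=8, rng=None):
    """Run a few local SGD steps on a (possibly stale) model copy.

    A simple least-squares objective is used purely for illustration; the
    paper's analysis covers general smooth non-convex losses.
    """
    rng = rng or np.random.default_rng()
    w = model.copy()
    for _ in range(steps):
        idx = rng.choice(len(targets), size=batch_size, replace=False)
        X, y = data[idx], targets[idx]
        grad = X.T @ (X @ w - y) / batch_size   # stochastic gradient
        w -= lr * grad
    return w - model                            # client update (delta)

def fedbuff_style_training(client_data, rounds=40, buffer_size=4,
                           server_lr=1.0, seed=0):
    """Buffered asynchronous aggregation, in the spirit of FedBuff.

    Client updates may be computed from stale model copies; the server
    applies an aggregated update only after `buffer_size` deltas arrive.
    """
    rng = np.random.default_rng(seed)
    dim = client_data[0][0].shape[1]
    model = np.zeros(dim)
    snapshots = [model.copy()]      # past models, used to simulate staleness
    buffer = []

    for _ in range(rounds):
        # Sample a client and a (random) staleness level for its model copy.
        cid = rng.integers(len(client_data))
        stale_model = snapshots[rng.integers(len(snapshots))]
        X, y = client_data[cid]
        buffer.append(local_sgd(stale_model, X, y, rng=rng))

        # Server step: aggregate once the buffer is full, then clear it.
        if len(buffer) >= buffer_size:
            model = model + server_lr * np.mean(buffer, axis=0)
            snapshots.append(model.copy())
            buffer.clear()
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = rng.normal(size=5)
    clients = []
    for _ in range(8):              # heterogeneous synthetic clients
        X = rng.normal(size=(64, 5)) + rng.normal(scale=0.5, size=5)
        clients.append((X, X @ w_true + 0.1 * rng.normal(size=64)))
    w = fedbuff_style_training(clients)
    print("estimation error:", np.linalg.norm(w - w_true))
```

In the actual setting, staleness arises from clients training concurrently on whatever model version they last downloaded, and the buffered updates are combined via secure aggregation; neither is modeled in this sketch.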
