

One-Pass Distribution Sketch for Measuring Data Heterogeneity in Federated Learning

Zichang Liu · Zhaozhuo Xu · Benjamin Coleman · Anshumali Shrivastava

Great Hall & Hall B1+B2 (level 1) #539
Tue 12 Dec 8:45 a.m. PST — 10:45 a.m. PST


Federated learning (FL) is a machine learning paradigm in which multiple client devices train models collaboratively without exchanging data. The data heterogeneity problem naturally arises in FL because data on different clients follow diverse distributions. To mitigate the negative influence of data heterogeneity, we need to start by measuring it across clients. However, efficiently measuring the divergence between distributions is a challenging problem, especially in high dimensions. In this paper, we propose a one-pass distribution sketch to represent the client data distribution. Our sketching algorithm requires only a single pass over the client data, which is efficient in terms of both time and memory. Moreover, we show in both theory and practice that the distance between two distribution sketches represents the divergence between their corresponding distributions. Furthermore, we demonstrate with extensive experiments that our distribution sketch improves client selection in FL training. We also show that our distribution sketch is an efficient solution to the cold-start problem in FL for new clients with unlabeled data.
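To make the idea concrete, the following is a minimal, hypothetical sketch of the general technique the abstract describes: each client makes a single pass over its local data, hashes every point with shared signed random projections (a SimHash-style locality-sensitive hash), and accumulates a normalized bucket histogram. The distance between two such histograms then serves as a proxy for the divergence between the underlying distributions. The function names, hash construction, and parameters here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def distribution_sketch(X, projections, num_buckets=64):
    """One-pass sketch of a client's data distribution.

    Illustrative stand-in for the paper's method: hash each point with
    signed random projections and count how many points land in each
    bucket, then normalize so the sketch is an empirical distribution.
    """
    sketch = np.zeros(num_buckets)
    for x in X:  # single pass over the client's data
        bits = (projections @ x > 0).astype(int)       # LSH signature
        bucket = int("".join(map(str, bits)), 2) % num_buckets
        sketch[bucket] += 1.0
    return sketch / len(X)

rng = np.random.default_rng(0)
d, num_bits = 16, 6
P = rng.normal(size=(num_bits, d))  # projections shared by all clients

# Two clients drawn from the same distribution, one from a shifted one.
client_a = rng.normal(0.0, 1.0, size=(2000, d))
client_b = rng.normal(0.0, 1.0, size=(2000, d))
client_c = rng.normal(3.0, 1.0, size=(2000, d))

s_a, s_b, s_c = (distribution_sketch(X, P) for X in (client_a, client_b, client_c))

# Sketch distance tracks distribution divergence: clients a and b
# (same distribution) should be much closer than a and c (shifted).
print(np.linalg.norm(s_a - s_b))
print(np.linalg.norm(s_a - s_c))
```

In an FL setting, each client would upload only its small fixed-size sketch; the server could then compare sketches pairwise to guide client selection without ever seeing raw data.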
