Federated learning data is drawn from a distribution of distributions: clients are sampled from a meta-distribution, and their data are drawn from personal data distributions. Generalization studies in federated learning should therefore distinguish the performance gap caused by unseen data from participating clients (the out-of-sample gap) from the gap caused by entirely unseen client distributions (the participation gap). In this work, we propose a framework for disentangling these performance gaps. Using this framework, we observe and explain differences in behavior across natural and synthetic federated datasets, indicating that the dataset synthesis strategy can be important for realistic simulation of generalization in federated learning. We propose a semantic synthesis strategy that enables realistic simulation without naturally partitioned data.
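In practice, the two gaps can be estimated by evaluating a single trained model on three pools of data: the training data of participating clients, held-out data from those same clients, and data from clients that never participated. The snippet below is a minimal sketch of that bookkeeping, not the paper's implementation; the `eval_on` callback, the client dictionaries, and the `"train"`/`"test"` split names are hypothetical placeholders.

```python
import numpy as np

def mean_client_acc(model, clients, split, eval_on):
    """Average the model's accuracy over the given clients' `split` data."""
    return float(np.mean([eval_on(model, c[split]) for c in clients]))

def generalization_gaps(model, participating, non_participating, eval_on):
    # Accuracy on the data the model was trained on (participating clients, train split).
    acc_train = mean_client_acc(model, participating, "train", eval_on)
    # Accuracy on unseen data from the same participating clients.
    acc_unseen_data = mean_client_acc(model, participating, "test", eval_on)
    # Accuracy on clients that never participated in training.
    acc_unseen_clients = mean_client_acc(model, non_participating, "test", eval_on)
    return {
        # Gap from unseen data of seen clients.
        "out_of_sample_gap": acc_train - acc_unseen_data,
        # Gap from unseen client distributions.
        "participation_gap": acc_unseen_data - acc_unseen_clients,
    }
```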
Author Information
Honglin Yuan (Stanford University)
Warren Morningstar (Google)
I am an AI Resident at Google, studying how to model uncertainty in neural networks. Before Google, I was an astrophysicist at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University, working on statistical modeling and machine learning applied to astronomical observations.
Lin Ning (Google Research)
More from the Same Authors
- 2021 : Learning Federated Representations and Recommendations with Limited Negatives »
  Lin Ning · Sushant Prakash
- 2021 : Sharp Bounds for FedAvg (Local SGD) »
  Margalit Glasgow · Honglin Yuan · Tengyu Ma
- 2021 : PAC^m-Bayes: Narrowing the Empirical Risk Gap in the Misspecified Bayesian Regime »
  Joshua Dillon · Warren Morningstar · Alexander Alemi
- 2021 : Contributed Talk 4: Sharp Bounds for FedAvg (Local SGD) »
  Margalit Glasgow · Honglin Yuan · Tengyu Ma
- 2020 Poster: Federated Accelerated Stochastic Gradient Descent »
  Honglin Yuan · Tengyu Ma