In this paper, we study communication-efficient distributed algorithms for distributionally robust federated learning via periodic averaging with adaptive sampling. In contrast to standard empirical risk minimization, the minimax structure of the underlying optimization problem poses a key difficulty: the global parameter that controls the mixture of local losses can only be updated infrequently, at the global synchronization stage. To compensate, we propose a Distributionally Robust Federated Averaging (DRFA) algorithm that employs a novel snapshotting scheme to approximate the accumulated history of gradients with respect to the mixing parameter. We analyze the convergence rate of DRFA in both convex-linear and nonconvex-linear settings. We also generalize the proposed idea to objectives with regularization on the mixture parameter and propose a proximal variant, dubbed DRFA-Prox, with provable convergence rates. We further analyze an alternative optimization method for the regularized case in strongly-convex-strongly-concave and nonconvex (under the PL condition)-strongly-concave settings. To the best of our knowledge, this is the first work to solve distributionally robust federated learning with reduced communication and to analyze the efficiency of local descent methods on distributed minimax problems. We provide corroborating experimental evidence for our theoretical results in federated learning settings.
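To make the structure of the method concrete, the following is a minimal sketch of the kind of loop the abstract describes: sampled clients run local SGD between synchronization rounds, and the mixture parameter over client losses is updated only at synchronization using a randomly chosen local snapshot. All specifics here (toy quadratic losses, step sizes, the helper names) are illustrative assumptions, not the paper's actual implementation or pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n clients, each with a quadratic local loss
# f_i(w) = 0.5 * ||w - c_i||^2, standing in for the paper's local objectives.
n_clients, dim = 5, 3
centers = rng.normal(size=(n_clients, dim))

def local_loss(i, w):
    return 0.5 * np.sum((w - centers[i]) ** 2)

def local_grad(i, w):
    return w - centers[i]

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def drfa_sketch(rounds=50, tau=5, eta=0.1, gamma=0.5, m=2):
    w = np.zeros(dim)                       # global model
    lam = np.ones(n_clients) / n_clients    # mixture over client losses
    for _ in range(rounds):
        # Adaptive sampling: draw m clients according to the mixture lam.
        sampled = rng.choice(n_clients, size=m, replace=True, p=lam)
        # Each sampled client runs tau local SGD steps; a random local
        # iterate is kept as the "snapshot" used to update lam later.
        snap_idx = rng.integers(tau)
        local_models, snapshots = [], []
        for i in sampled:
            wi = w.copy()
            for t in range(tau):
                wi -= eta * local_grad(i, wi)
                if t == snap_idx:
                    snapshots.append(wi.copy())
            local_models.append(wi)
        w = np.mean(local_models, axis=0)   # periodic averaging
        w_snap = np.mean(snapshots, axis=0)
        # The gradient of the objective in lam is the vector of local
        # losses; evaluate it at the snapshot (on all clients here, for
        # simplicity) and take a projected ascent step on the simplex.
        losses = np.array([local_loss(i, w_snap) for i in range(n_clients)])
        lam = project_simplex(lam + gamma * tau * losses)
        lam = lam / lam.sum()               # guard against fp drift
    return w, lam
```

The key point the sketch illustrates is that `lam` is touched only once per communication round, and the missed `tau` ascent steps are approximated from a single randomly chosen local snapshot rather than by communicating every local iterate.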
Yuyang Deng (Pennsylvania State University)
Mohammad Mahdi Kamani (Pennsylvania State University)
Mehrdad Mahdavi (Pennsylvania State University)
Mehrdad Mahdavi is an Assistant Professor of Computer Science & Engineering at Pennsylvania State University. He runs the Machine Learning and Optimization Lab, which works on fundamental problems in computational and theoretical machine learning.