This paper advocates a new paradigm, Personalized Empirical Risk Minimization (PERM), to facilitate learning from heterogeneous data sources without imposing stringent constraints on the computational resources shared by participating devices. In PERM, we aim to learn a distinct model for each client by personalizing the aggregation of local empirical losses through an effective estimate of the statistical discrepancy among data distributions, which yields optimal statistical accuracy for all local distributions and overcomes the data heterogeneity issue. To learn personalized models at scale, we propose a distributed algorithm that replaces standard model averaging with model shuffling to simultaneously optimize the PERM objectives of all devices. This also allows learning distinct model architectures (e.g., neural networks with different numbers of parameters) for different clients, thus conforming to the memory and compute resources of individual clients. We rigorously analyze the convergence of the proposed algorithm and conduct experiments that corroborate the effectiveness of the proposed paradigm.
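To make the abstract's two ingredients concrete, the following is a minimal NumPy sketch (not the authors' implementation) of (i) a personalized objective for client i of the form sum_j alpha[i, j] * L_j(w_i), with weights derived from a heuristic discrepancy measure, and (ii) a cyclic "model shuffling" schedule in which every personalized model visits every client's data instead of being averaged. The squared-loss setup and the helpers `local_grad` and `mixing_weights` are illustrative assumptions, not the paper's estimator.

```python
# Hedged sketch of the PERM idea on synthetic heterogeneous linear-regression data.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, n_per_client = 4, 10, 50

# Heterogeneous synthetic data: each client has its own ground-truth vector.
truths = [rng.normal(size=dim) + 0.5 * k for k in range(num_clients)]
data = []
for w_star in truths:
    X = rng.normal(size=(n_per_client, dim))
    y = X @ w_star + 0.1 * rng.normal(size=n_per_client)
    data.append((X, y))

def local_grad(w, client):
    """Gradient of one client's empirical squared loss at model w."""
    X, y = data[client]
    return X.T @ (X @ w - y) / len(y)

def mixing_weights():
    """Hypothetical discrepancy-based weights: clients whose local least-squares
    solutions are closer receive larger weight in each other's personalized
    objective. PERM estimates such weights in a more principled way."""
    sols = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in data]
    W = np.empty((num_clients, num_clients))
    for i in range(num_clients):
        d = np.array([np.linalg.norm(sols[i] - s) for s in sols])
        W[i] = np.exp(-d)          # smaller discrepancy -> larger weight
        W[i] /= W[i].sum()         # normalize to the simplex
    return W

alpha = mixing_weights()
models = [np.zeros(dim) for _ in range(num_clients)]  # one model per client
lr = 0.05

# Model shuffling: in round t, client (i + t) % num_clients holds model i and
# applies a gradient step scaled by alpha[i, holder], so each personalized
# objective sum_j alpha[i, j] * L_j(w_i) is optimized without model averaging.
for t in range(200):
    for i in range(num_clients):
        holder = (i + t) % num_clients
        models[i] -= lr * alpha[i, holder] * local_grad(models[i], holder)

for i, (w, w_star) in enumerate(zip(models, truths)):
    print(f"client {i}: ||w - w*|| = {np.linalg.norm(w - w_star):.3f}")
```

Because the shuffling schedule only passes one model between clients per round, each client can in principle train a model of a different size, which is what allows the paradigm to accommodate heterogeneous memory and compute budgets.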
Author Information
Yuyang Deng (Penn State)
Mohammad Mahdi Kamani (Wyze Labs)
Pouria Mahdavinia (Penn State University)
Mehrdad Mahdavi (Pennsylvania State University)
Mehrdad Mahdavi is an Assistant Professor of Computer Science & Engineering at Pennsylvania State University. He runs the Machine Learning and Optimization Lab, which works on fundamental problems in computational and theoretical machine learning.
More from the Same Authors
- 2022: FedRule: Federated Rule Recommendation System with Graph Neural Networks »
  Yuhang Yao · Mohammad Mahdi Kamani · Zhongwei Cheng · Lin Chen · Carlee Joe-Wong · Tianqiang Liu
- 2023 Poster: Understanding Deep Gradient Leakage via Inversion Influence Functions »
  Haobo Zhang · Junyuan Hong · Yuyang Deng · Mehrdad Mahdavi · Jiayu Zhou
- 2023 Poster: Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation »
  Yuyang Deng · Ilja Kuzborskij · Mehrdad Mahdavi
- 2023 Poster: Wyze Rule: Federated Rule Dataset for Rule Recommendation Benchmarking »
  Mohammad Mahdi Kamani · Yuhang Yao · Hanjia Lyu · Zhongwei Cheng · Lin Chen · Liangju Li · Carlee Joe-Wong · Jiebo Luo
- 2022 Poster: Tight Analysis of Extra-gradient and Optimistic Gradient Methods For Nonconvex Minimax Problems »
  Pouria Mahdavinia · Yuyang Deng · Haochuan Li · Mehrdad Mahdavi
- 2020 Poster: Online Structured Meta-learning »
  Huaxiu Yao · Yingbo Zhou · Mehrdad Mahdavi · Zhenhui (Jessie) Li · Richard Socher · Caiming Xiong
- 2020 Poster: GCN meets GPU: Decoupling “When to Sample” from “How to Sample” »
  Morteza Ramezani · Weilin Cong · Mehrdad Mahdavi · Anand Sivasubramaniam · Mahmut Kandemir
- 2020 Poster: Distributionally Robust Federated Averaging »
  Yuyang Deng · Mohammad Mahdi Kamani · Mehrdad Mahdavi
- 2019 Poster: Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization »
  Farzin Haddadpour · Mohammad Mahdi Kamani · Mehrdad Mahdavi · Viveck Cadambe