Scaling up the size of a convolutional neural network (CNN) (e.g., its width or depth) is known to effectively improve model accuracy. However, the large model size impedes training on resource-constrained edge devices. For instance, federated learning (FL) may place an undue burden on the compute capability of edge nodes, even though there is a strong practical need for FL because of its privacy and confidentiality properties. To address the resource-constrained reality of edge devices, we reformulate FL as a group knowledge transfer training algorithm, called FedGKT. FedGKT employs a variant of the alternating minimization approach to train small CNNs on edge nodes and periodically transfer their knowledge by knowledge distillation to a large server-side CNN. FedGKT consolidates several advantages into a single framework: reduced demand for edge computation, lower communication bandwidth for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FedAvg. We train CNNs based on ResNet-56 and ResNet-110 on three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-IID variants. Our results show that FedGKT obtains comparable or even slightly higher accuracy than FedAvg. More importantly, FedGKT makes edge training affordable: compared to edge training with FedAvg, FedGKT demands 9 to 17 times less computation (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. Our source code is released at FedML (https://fedml.ai).
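The abstract describes an alternating scheme in which the edge and server networks each train against a combined cross-entropy and knowledge-distillation objective, using the other side's predictions as the soft target. A minimal NumPy sketch of such a bidirectional KD loss is shown below; the temperature `T`, weight `alpha`, and all tensor shapes are illustrative assumptions, not the paper's tuned values or exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis (numerically stabilized).
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q):
    # KL(p || q), averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def gkt_loss(student_logits, teacher_logits, labels, T=3.0, alpha=1.0):
    # Hard-label cross-entropy plus a KD term pulling the student's softened
    # predictions toward the other party's; used symmetrically on both sides.
    ce = cross_entropy(student_logits, labels)
    kd = kl_div(softmax(teacher_logits, T), softmax(student_logits, T))
    return ce + alpha * kd

# Toy round: logits for a batch of 4 examples over 10 classes.
rng = np.random.default_rng(0)
edge_logits = rng.normal(size=(4, 10))
server_logits = rng.normal(size=(4, 10))
labels = np.array([0, 3, 7, 1])

server_loss = gkt_loss(server_logits, edge_logits, labels)  # server distills from edge
edge_loss = gkt_loss(edge_logits, server_logits, labels)    # edge distills from server
```

Because only logits (and, in the paper, intermediate features) cross the network rather than full model weights, this style of transfer is what enables the reported bandwidth and edge-FLOP savings relative to FedAvg.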
Author Information
Chaoyang He (University of Southern California)
Murali Annavaram (University of Southern California)
Salman Avestimehr (University of Southern California)
More from the Same Authors
- 2020: On Polynomial Approximations for Privacy-Preserving and Verifiable ReLU Networks
  Salman Avestimehr
- 2021 Spotlight: MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
  Geng Yuan · Xiaolong Ma · Wei Niu · Zhengang Li · Zhenglun Kong · Ning Liu · Yifan Gong · Zheng Zhan · Chaoyang He · Qing Jin · Siyue Wang · Minghai Qin · Bin Ren · Yanzhi Wang · Sijia Liu · Xue Lin
- 2021: Characterizing and Improving MPC-based Private Inference for Transformer-based Models
  Yongqin Wang · Brian Knott · Murali Annavaram · Hsien-Hsin Lee
- 2021: Basil: A Fast and Byzantine-Resilient Approach for Decentralized Training
  Ahmed Elkordy · Saurav Prakash · Salman Avestimehr
- 2021: Secure Aggregation for Buffered Asynchronous Federated Learning
  Jinhyun So · Ramy Ali · Basak Guler · Salman Avestimehr
- 2021: FairFed: Enabling Group Fairness in Federated Learning
  Yahya Ezzeldin · Shen Yan · Chaoyang He · Emilio Ferrara · Salman Avestimehr
- 2022: Federated Learning of Large Models at the Edge via Principal Sub-Model Training
  Yue Niu · Saurav Prakash · Souvik Kundu · Sunwoo Lee · Salman Avestimehr
- 2022: Federated Sparse Training: Lottery Aware Model Compression for Resource Constrained Edge
  Sara Babakniya · Souvik Kundu · Saurav Prakash · Yue Niu · Salman Avestimehr
- 2022: Private Data Leakage via Exploiting Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems
  Hanieh Hashemi · Wenjie Xiong · Liu Ke · Kiwan Maeng · Murali Annavaram · G. Edward Suh · Hsien-Hsin Lee
- 2022: pFLSynth: Personalized Federated Learning of Image Synthesis in Multi-Contrast MRI
  Onat Dalmaz · Muhammad U Mirza · Gökberk Elmas · Muzaffer Özbey · Salman Ul Hassan Dar · Emir Ceyani · Salman Avestimehr · Tolga Cukur
- 2022 Spotlight: Self-Aware Personalized Federated Learning
  Huili Chen · Jie Ding · Eric W. Tramel · Shuang Wu · Anit Kumar Sahu · Salman Avestimehr · Tao Zhang
- 2022: LightVeriFL: Lightweight and Verifiable Secure Federated Learning
  Baturalp Buyukates · Jinhyun So · Hessam Mahdavifar · Salman Avestimehr
- 2022 Poster: Self-Aware Personalized Federated Learning
  Huili Chen · Jie Ding · Eric W. Tramel · Shuang Wu · Anit Kumar Sahu · Salman Avestimehr · Tao Zhang
- 2022 Poster: FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings
  Jean Ogier du Terrail · Samy-Safwan Ayed · Edwige Cyffers · Felix Grimberg · Chaoyang He · Regis Loeb · Paul Mangold · Tanguy Marchand · Othmane Marfoq · Erum Mushtaq · Boris Muzellec · Constantin Philippenko · Santiago Silva · Maria Teleńczuk · Shadi Albarqouni · Salman Avestimehr · Aurélien Bellet · Aymeric Dieuleveut · Martin Jaggi · Sai Praneeth Karimireddy · Marco Lorenzi · Giovanni Neglia · Marc Tommasi · Mathieu Andreux
- 2021 Poster: MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
  Geng Yuan · Xiaolong Ma · Wei Niu · Zhengang Li · Zhenglun Kong · Ning Liu · Yifan Gong · Zheng Zhan · Chaoyang He · Qing Jin · Siyue Wang · Minghai Qin · Bin Ren · Yanzhi Wang · Sijia Liu · Xue Lin
- 2020 Poster: A Scalable Approach for Privacy-Preserving Collaborative Machine Learning
  Jinhyun So · Basak Guler · Salman Avestimehr
- 2020 Poster: Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks
  Mohammadreza Mousavi Kalan · Zalan Fabian · Salman Avestimehr · Mahdi Soltanolkotabi
- 2018 Poster: Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training
  Youjie Li · Mingchao Yu · Songze Li · Salman Avestimehr · Nam Sung Kim · Alex Schwing
- 2018 Poster: GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training
  Mingchao Yu · Zhifeng Lin · Krishna Narra · Songze Li · Youjie Li · Nam Sung Kim · Alex Schwing · Murali Annavaram · Salman Avestimehr
- 2017 Poster: Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication
  Qian Yu · Mohammad Maddah-Ali · Salman Avestimehr