Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. FL incurs significant communication costs because it repeatedly transmits the model between the server and clients. Recently proposed algorithms quantize the model parameters to compress FL communication efficiently. We find that dynamically adapting the quantization level can boost compression without sacrificing model quality. We introduce DAdaQuant, a doubly-adaptive quantization algorithm that dynamically changes the quantization level across time and across clients. Our experiments show that DAdaQuant consistently improves client-to-server compression, outperforming the strongest non-adaptive baselines by up to 2.8×.
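The time-adaptive half of the idea can be illustrated with a minimal sketch: an unbiased stochastic fixed-point quantizer combined with a hypothetical schedule that raises the quantization level over training rounds. DAdaQuant's actual quantizer, schedule, and per-client adaptation rule are specified in the paper; the function names and parameters below are illustrative assumptions only.

```python
import numpy as np

def quantize(params, levels, rng):
    """Unbiased stochastic fixed-point quantization.

    Maps each parameter to one of `levels` magnitude bins (per sign),
    rounding up with probability equal to the fractional part so that
    the quantized value equals the input in expectation.
    """
    scale = np.max(np.abs(params))
    if scale == 0:
        return params.copy()
    normalized = np.abs(params) / scale * levels   # in [0, levels]
    lower = np.floor(normalized)
    prob_up = normalized - lower                   # fractional part
    rounded = lower + (rng.random(params.shape) < prob_up)
    return np.sign(params) * rounded / levels * scale

def time_adaptive_levels(round_idx, q_min=2, q_max=64, total_rounds=100):
    """Hypothetical schedule: coarse quantization early, finer later."""
    frac = round_idx / max(total_rounds - 1, 1)
    return int(round(q_min + frac * (q_max - q_min)))

# Coarse early rounds save bandwidth when updates are noisy anyway;
# later rounds spend more bits as the model converges.
rng = np.random.default_rng(0)
params = rng.normal(size=1000)
for r in (0, 50, 99):
    q = time_adaptive_levels(r)
    err = np.mean((quantize(params, q, rng) - params) ** 2)
    print(f"round {r}: levels={q}, MSE={err:.5f}")
```

The quantization error shrinks as the level count grows, which is what makes a low-to-high schedule attractive: it trades accuracy for bandwidth only in the rounds where the model least needs precise updates.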
Author Information
Robert Hönig (ETH Zürich)
Yiren Zhao (University of Cambridge)
Robert Mullins (University of Cambridge)
More from the Same Authors
- 2022 : Dynamic Head Pruning in Transformers
  Prisha Satwani · Yiren Zhao · Vidhi Lalchand · Robert Mullins
- 2022 : Revisiting Graph Neural Network Embeddings
  Skye Purchase · Yiren Zhao · Robert Mullins
- 2022 : Wide Attention Is The Way Forward For Transformers
  Jason Brown · Yiren Zhao · I Shumailov · Robert Mullins
- 2022 : DARTFormer: Finding The Best Type Of Attention
  Jason Brown · Yiren Zhao · I Shumailov · Robert Mullins
- 2022 Poster: Rapid Model Architecture Adaption for Meta-Learning
  Yiren Zhao · Xitong Gao · I Shumailov · Nicolo Fusi · Robert Mullins
- 2021 : Poster Session 1 (gather.town)
  Hamed Jalali · Robert Hönig · Maximus Mutschler · Manuel Madeira · Abdurakhmon Sadiev · Egor Shulgin · Alasdair Paren · Pascal Esser · Simon Roburin · Julius Kunze · Agnieszka Słowik · Frederik Benzing · Futong Liu · Hongyi Li · Ryotaro Mitsuboshi · Grigory Malinovsky · Jayadev Naram · Zhize Li · Igor Sokolov · Sharan Vaswani
- 2021 Poster: Manipulating SGD with Data Ordering Attacks
  I Shumailov · Zakhar Shumaylov · Dmitry Kazhdan · Yiren Zhao · Nicolas Papernot · Murat Erdogdu · Ross J Anderson
- 2019 Poster: Focused Quantization for Sparse CNNs
  Yiren Zhao · Xitong Gao · Daniel Bates · Robert Mullins · Cheng-Zhong Xu