To improve the resilience of distributed training to worst-case, or Byzantine, node failures, several recent methods have replaced gradient averaging with robust aggregation. Such techniques can have high computational costs, often quadratic in the number of compute nodes, and offer only limited robustness guarantees. Other methods have instead used redundancy to guarantee robustness, but can tolerate only a limited number of Byzantine failures. In this work, we present DETOX, a Byzantine-resilient distributed training framework that combines algorithmic redundancy with robust aggregation. DETOX operates in two steps: a filtering step that uses limited redundancy to significantly reduce the effect of Byzantine nodes, and a hierarchical aggregation step that can be used in tandem with any state-of-the-art robust aggregation method. We show theoretically that this yields a substantial increase in robustness, with a per-iteration runtime that can be nearly linear in the number of compute nodes. We provide extensive experiments over real distributed setups across a variety of large-scale machine learning tasks, showing that DETOX delivers orders-of-magnitude improvements in accuracy and speed over many state-of-the-art Byzantine-resilient approaches.
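The two steps described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `detox_aggregate`, the exact-match majority vote, and the coordinate-wise median (standing in for "any state-of-the-art robust aggregation method") are all assumptions made for the sketch. It assumes nodes are pre-assigned to redundant groups of size `group_size`, with every node in a group computing the same gradient.

```python
import numpy as np

def detox_aggregate(grads, group_size, robust_agg=np.median):
    """Sketch of DETOX's two steps:
    1) filtering: a majority vote inside each redundant group discards
       gradients that Byzantine nodes tampered with;
    2) hierarchical aggregation: the vote winners are combined with a
       robust aggregator (coordinate-wise median here, for illustration).
    `grads` is a list of per-node gradient vectors, ordered by group.
    """
    filtered = []
    for i in range(0, len(grads), group_size):
        group = grads[i:i + group_size]
        # Majority vote by exact byte equality; honest replicas of the
        # same gradient agree bit-for-bit, so the honest value wins
        # whenever a group has a majority of honest nodes.
        keys = [g.tobytes() for g in group]
        winner = max(set(keys), key=keys.count)
        filtered.append(group[keys.index(winner)].astype(float))
    # Hierarchical aggregation over the (much shorter) filtered list.
    return robust_agg(np.stack(filtered), axis=0)
```

The point of the filtering step is visible in the sketch: a Byzantine gradient survives only if its group has a Byzantine majority, so the robust aggregator afterwards faces far fewer corrupted inputs, which is what enables the improved robustness and near-linear per-iteration runtime claimed above.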
Author Information
Shashank Rajput (University of Wisconsin - Madison)
Hongyi Wang (University of Wisconsin-Madison)
Zachary Charles (Google Research)
Dimitris Papailiopoulos (University of Wisconsin-Madison)
More from the Same Authors
- 2021 : Iterated Vector Fields and Conservatism, with Applications to Federated Learning »
  Zachary Charles · John Rush
- 2022 : Active Learning is a Strong Baseline for Data Subset Selection »
  Dongmin Park · Dimitris Papailiopoulos · Kangwook Lee
- 2022 : A Better Way to Decay: Proximal Gradient Training Algorithms for Neural Nets »
  Liu Yang · Jifan Zhang · Joseph Shenouda · Dimitris Papailiopoulos · Kangwook Lee · Robert Nowak
- 2022 : Motley: Benchmarking Heterogeneity and Personalization in Federated Learning »
  Shanshan Wu · Tian Li · Zachary Charles · Yu Xiao · Ken Liu · Zheng Xu · Virginia Smith
- 2023 Poster: Dissecting Chain-of-Thought: A Study on Compositional In-Context Learning of MLPs »
  Yingcong Li · Kartik Sreenivasan · Angeliki Giannou · Dimitris Papailiopoulos · Samet Oymak
- 2023 Poster: Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy »
  Anastasiia Koloskova · Ryan McKenna · Zachary Charles · John Rush · H. Brendan McMahan
- 2023 Poster: Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning »
  Zachary Charles · Nicole Mitchell · Krishna Pillutla · Michael Reneer · Zachary Garrett
- 2022 Poster: LIFT: Language-Interfaced Fine-Tuning for Non-language Machine Learning Tasks »
  Tuan Dinh · Yuchen Zeng · Ruisu Zhang · Ziqian Lin · Michael Gira · Shashank Rajput · Jy-yong Sohn · Dimitris Papailiopoulos · Kangwook Lee
- 2022 Poster: Rare Gems: Finding Lottery Tickets at Initialization »
  Kartik Sreenivasan · Jy-yong Sohn · Liu Yang · Matthew Grinde · Alliot Nagle · Hongyi Wang · Eric Xing · Kangwook Lee · Dimitris Papailiopoulos
- 2021 Poster: An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks »
  Shashank Rajput · Kartik Sreenivasan · Dimitris Papailiopoulos · Amin Karbasi
- 2021 Poster: On Large-Cohort Training for Federated Learning »
  Zachary Charles · Zachary Garrett · Zhouyuan Huo · Sergei Shmulyian · Virginia Smith
- 2020 Poster: Bad Global Minima Exist and SGD Can Reach Them »
  Shengchao Liu · Dimitris Papailiopoulos · Dimitris Achlioptas
- 2020 Poster: Attack of the Tails: Yes, You Really Can Backdoor Federated Learning »
  Hongyi Wang · Kartik Sreenivasan · Shashank Rajput · Harit Vishwakarma · Saurabh Agarwal · Jy-yong Sohn · Kangwook Lee · Dimitris Papailiopoulos
- 2020 Poster: Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient »
  Ankit Pensia · Shashank Rajput · Alliot Nagle · Harit Vishwakarma · Dimitris Papailiopoulos
- 2020 Spotlight: Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient »
  Ankit Pensia · Shashank Rajput · Alliot Nagle · Harit Vishwakarma · Dimitris Papailiopoulos
- 2018 Poster: The Effect of Network Width on the Performance of Large-batch Training »
  Lingjiao Chen · Hongyi Wang · Jinman Zhao · Dimitris Papailiopoulos · Paraschos Koutris
- 2018 Poster: ATOMO: Communication-efficient Learning via Atomic Sparsification »
  Hongyi Wang · Scott Sievert · Shengchao Liu · Zachary Charles · Dimitris Papailiopoulos · Stephen Wright
- 2016 Poster: Cyclades: Conflict-free Asynchronous Machine Learning »
  Xinghao Pan · Maximilian Lam · Stephen Tu · Dimitris Papailiopoulos · Ce Zhang · Michael Jordan · Kannan Ramchandran · Christopher Ré · Benjamin Recht
- 2015 Poster: Orthogonal NMF through Subspace Exploration »
  Megasthenis Asteris · Dimitris Papailiopoulos · Alex Dimakis
- 2015 Poster: Sparse PCA via Bipartite Matchings »
  Megasthenis Asteris · Dimitris Papailiopoulos · Anastasios Kyrillidis · Alex Dimakis
- 2015 Poster: Parallel Correlation Clustering on Big Graphs »
  Xinghao Pan · Dimitris Papailiopoulos · Samet Oymak · Benjamin Recht · Kannan Ramchandran · Michael Jordan