Large-scale distributed training of Deep Neural Networks (DNNs) on state-of-the-art platforms is expected to be severely communication-constrained. To overcome this limitation, numerous gradient compression techniques have been proposed and have demonstrated high compression ratios. However, most existing compression methods do not scale well to large distributed systems (due to gradient build-up) and/or lack evaluation on large datasets. To mitigate these issues, we propose a new compression technique, Scalable Sparsified Gradient Compression (ScaleCom), that (i) leverages similarity in the gradient distribution among learners to provide a commutative compressor and keep the communication cost constant with respect to the number of workers, and (ii) includes a low-pass filter in the local gradient accumulation to mitigate the impact of large-batch training and significantly improve scalability. Through theoretical analysis, we show that ScaleCom provides favorable convergence guarantees and is compatible with gradient all-reduce techniques. Furthermore, we experimentally demonstrate that ScaleCom has small overhead, directly reduces gradient traffic, and achieves high compression rates (70-150X) and excellent scalability (up to 64-80 learners and 10X larger batch sizes than normal training) across a wide range of applications (image, language, and speech) without significant accuracy loss.
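For readers who want a concrete picture of the two mechanisms the abstract names, the sketch below (PyTorch) shows low-pass-filtered local gradient accumulation combined with top-k sparsification before communication. It is a minimal illustration, not the authors' ScaleCom implementation: the class name, the filter coefficient `beta`, and the sparsity ratio `k_ratio` are assumptions made for the example, and the commutative-compressor / all-reduce machinery is not shown.

```python
# Minimal sketch (not the authors' implementation) of low-pass-filtered
# local gradient accumulation with top-k sparsification. Names and
# constants (beta, k_ratio) are illustrative assumptions.
import torch


class LowPassTopKCompressor:
    def __init__(self, beta: float = 0.9, k_ratio: float = 0.01):
        self.beta = beta        # low-pass filter coefficient (assumed value)
        self.k_ratio = k_ratio  # fraction of entries sent (~1% -> ~100X compression)
        self.residual = None    # locally accumulated, filtered gradient

    def compress(self, grad: torch.Tensor):
        flat = grad.flatten()
        if self.residual is None:
            self.residual = torch.zeros_like(flat)
        # Low-pass filter the local accumulation: blend the previous
        # residual with the new gradient instead of simply summing them.
        self.residual = self.beta * self.residual + (1.0 - self.beta) * flat
        # Keep only the k largest-magnitude entries for communication.
        k = max(1, int(self.k_ratio * flat.numel()))
        _, idx = torch.topk(self.residual.abs(), k)
        values = self.residual[idx].clone()
        # Error feedback: what is sent is removed from the local accumulation.
        self.residual[idx] = 0.0
        return idx, values


# Example use inside a training loop (one learner):
# compressor = LowPassTopKCompressor()
# idx, vals = compressor.compress(param.grad)
# ...exchange (idx, vals) with the other learners, then apply the
# aggregated sparse update to the parameters.
```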
Author Information
Chia-Yu Chen (IBM research)
My research areas focus on: accelerator architecture, compiler design and library development, machine learning and neural networks, and VLSI and nano devices.
Jiamin Ni (IBM)
Songtao Lu (IBM Research)
Xiaodong Cui (IBM T. J. Watson Research Center)
Pin-Yu Chen (IBM Research AI)
Xiao Sun (IBM Thomas J. Watson Research Center)
Naigang Wang (IBM T. J. Watson Research Center)
Swagath Venkataramani (IBM Research)
Vijayalakshmi (Viji) Srinivasan (IBM TJ Watson)
Wei Zhang (IBM T.J.Watson Research Center)
BE, Beijing University of Technology, 2005; MSc, Technical University of Denmark, 2008; PhD, University of Wisconsin-Madison, 2013 (all in computer science). Published papers in ASPLOS, OOPSLA, OSDI, PLDI, IJCAI, ICDM, and NIPS.
Kailash Gopalakrishnan (IBM Research)
More from the Same Authors
- 2020 : Paper 10: Certified Interpretability Robustness for Class Activation Mapping »
Alex Gu · Tsui-Wei Weng · Pin-Yu Chen · Sijia Liu · Luca Daniel
- 2021 : CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks »
Ruchir Puri · David Kung · Geert Janssen · Wei Zhang · Giacomo Domeniconi · Vladimir Zolotov · Julian T Dolby · Jie Chen · Mihir Choudhury · Lindsey Decker · Veronika Thost · Luca Buratti · Saurabh Pujar · Shyam Ramji · Ulrich Finkler · Susan Malaika · Frederick Reiss
- 2021 : Certified Robustness for Free in Differentially Private Federated Learning »
Chulin Xie · Yunhui Long · Pin-Yu Chen · Krishnaram Kenthapadi · Bo Li
- 2021 : MAML is a Noisy Contrastive Learner »
Chia-Hsiang Kao · Wei-Chen Chiu · Pin-Yu Chen
- 2021 : QTN-VQC: An End-to-End Learning Framework for Quantum Neural Networks »
Jun Qi · Huck Yang · Pin-Yu Chen
- 2021 : Pessimistic Model Selection for Offline Deep Reinforcement Learning »
Huck Yang · Yifan Cui · Pin-Yu Chen
- 2022 : SCERL: A Benchmark for intersecting language and safe reinforcement learning »
Lan Hoang · Shivam Ratnakar · Nicolas Galichet · Akifumi Wachi · Keerthiram Murugesan · Songtao Lu · Mattia Atzeni · Michael Katz · Subhajit Chaudhury
- 2022 : Visual Prompting for Adversarial Robustness »
Aochuan Chen · Peter Lorenz · Yuguang Yao · Pin-Yu Chen · Sijia Liu
- 2022 : Do Domain Generalization Methods Generalize Well? »
Akshay Mehra · Bhavya Kailkhura · Pin-Yu Chen · Jihun Hamm
- 2022 : On the Adversarial Robustness of Vision Transformers »
Rulin Shao · Zhouxing Shi · Jinfeng Yi · Pin-Yu Chen · Cho-Jui Hsieh
- 2022 : Panel »
Pin-Yu Chen · Alex Gittens · Bo Li · Celia Cintas · Hilde Kuehne · Payel Das
- 2022 : Q & A »
Sayak Paul · Sijia Liu · Pin-Yu Chen
- 2022 : Deep dive on foundation models for computer vision »
Pin-Yu Chen
- 2022 Tutorial: Foundational Robustness of Foundation Models »
Pin-Yu Chen · Sijia Liu · Sayak Paul
- 2022 : Basics in foundation model and robustness »
Pin-Yu Chen · Sijia Liu
- 2022 : Conditional Moment Alignment for Improved Generalization in Federated Learning »
Jayanth Reddy Regatti · Songtao Lu · Abhishek Gupta · Ness Shroff
- 2022 Poster: A Stochastic Linearized Augmented Lagrangian Method for Decentralized Bilevel Optimization »
Songtao Lu · Siliang Zeng · Xiaodong Cui · Mark Squillante · Lior Horesh · Brian Kingsbury · Jia Liu · Mingyi Hong
- 2022 Poster: Robustness to Unbounded Smoothness of Generalized SignSGD »
Michael Crawshaw · Mingrui Liu · Francesco Orabona · Wei Zhang · Zhenxun Zhuang
- 2022 Poster: Deep Compression of Pre-trained Transformer Models »
Naigang Wang · Chi-Chun (Charlie) Liu · Swagath Venkataramani · Sanchari Sen · Chia-Yu Chen · Kaoutar El Maghraoui · Vijayalakshmi (Viji) Srinivasan · Leland Chang
- 2022 Poster: Understanding Benign Overfitting in Gradient-Based Meta Learning »
Lisha Chen · Songtao Lu · Tianyi Chen
- 2021 Poster: Predicting Deep Neural Network Generalization with Perturbation Response Curves »
Yair Schiff · Brian Quanz · Payel Das · Pin-Yu Chen
- 2021 Poster: Mean-based Best Arm Identification in Stochastic Bandits under Reward Contamination »
Arpan Mukherjee · Ali Tajer · Pin-Yu Chen · Payel Das
- 2021 Poster: Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks »
Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong
- 2021 Poster: CAFE: Catastrophic Data Leakage in Vertical Federated Learning »
Xiao Jin · Pin-Yu Chen · Chia-Yi Hsu · Chia-Mu Yu · Tianyi Chen
- 2021 Poster: Taming Communication and Sample Complexities in Decentralized Policy Evaluation for Cooperative Multi-Agent Reinforcement Learning »
Xin Zhang · Zhuqing Liu · Jia Liu · Zhengyuan Zhu · Songtao Lu
- 2021 Poster: Adversarial Attack Generation Empowered by Min-Max Optimization »
Jingkang Wang · Tianyun Zhang · Sijia Liu · Pin-Yu Chen · Jiacen Xu · Makan Fardad · Bo Li
- 2021 : Live Q&A session: MAML is a Noisy Contrastive Learner »
Chia-Hsiang Kao · Wei-Chen Chiu · Pin-Yu Chen
- 2021 : Contributed Talk (Oral): MAML is a Noisy Contrastive Learner »
Chia-Hsiang Kao · Wei-Chen Chiu · Pin-Yu Chen
- 2021 : SenSE: A Toolkit for Semantic Change Exploration via Word Embedding Alignment »
Maurício Gruppi · Sibel Adali · Pin-Yu Chen
- 2021 Poster: When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? »
Lijie Fan · Sijia Liu · Pin-Yu Chen · Gaoyuan Zhang · Chuang Gan
- 2021 Poster: Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations »
Yu-Lin Tsai · Chia-Yi Hsu · Chia-Mu Yu · Pin-Yu Chen
- 2021 Poster: Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning »
Akshay Mehra · Bhavya Kailkhura · Pin-Yu Chen · Jihun Hamm
- 2020 Poster: A Decentralized Parallel Algorithm for Training Generative Adversarial Nets »
Mingrui Liu · Wei Zhang · Youssef Mroueh · Xiaodong Cui · Jarret Ross · Tianbao Yang · Payel Das
- 2020 Poster: Ultra-Low Precision 4-bit Training of Deep Neural Networks »
Xiao Sun · Naigang Wang · Chia-Yu Chen · Jiamin Ni · Ankur Agrawal · Xiaodong Cui · Swagath Venkataramani · Kaoutar El Maghraoui · Vijayalakshmi (Viji) Srinivasan · Kailash Gopalakrishnan
- 2020 Poster: Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems »
Songtao Lu · Meisam Razaviyayn · Bo Yang · Kejun Huang · Mingyi Hong
- 2020 Spotlight: Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems »
Songtao Lu · Meisam Razaviyayn · Bo Yang · Kejun Huang · Mingyi Hong
- 2020 Oral: Ultra-Low Precision 4-bit Training of Deep Neural Networks »
Xiao Sun · Naigang Wang · Chia-Yu Chen · Jiamin Ni · Ankur Agrawal · Xiaodong Cui · Swagath Venkataramani · Kaoutar El Maghraoui · Vijayalakshmi (Viji) Srinivasan · Kailash Gopalakrishnan
- 2020 Poster: FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training »
Yonggan Fu · Haoran You · Yang Zhao · Yue Wang · Chaojian Li · Kailash Gopalakrishnan · Zhangyang Wang · Yingyan Lin
- 2020 Poster: Higher-Order Certification For Randomized Smoothing »
Jeet Mohapatra · Ching-Yun Ko · Tsui-Wei Weng · Pin-Yu Chen · Sijia Liu · Luca Daniel
- 2020 Poster: Optimizing Mode Connectivity via Neuron Alignment »
Norman J Tatro · Pin-Yu Chen · Payel Das · Igor Melnyk · Prasanna Sattigeri · Rongjie Lai
- 2020 Spotlight: Higher-Order Certification For Randomized Smoothing »
Jeet Mohapatra · Ching-Yun Ko · Tsui-Wei Weng · Pin-Yu Chen · Sijia Liu · Luca Daniel
- 2020 Poster: Decentralized TD Tracking with Linear Function Approximation and its Finite-Time Analysis »
Gang Wang · Songtao Lu · Georgios Giannakis · Gerald Tesauro · Jian Sun
- 2019 : Poster Session »
Ahana Ghosh · Javad Shafiee · Akhilan Boopathy · Alex Tamkin · Theodoros Vasiloudis · Vedant Nanda · Ali Baheri · Paul Fieguth · Andrew Bennett · Guanya Shi · Hao Liu · Arushi Jain · Jacob Tyo · Benjie Wang · Boxiao Chen · Carroll Wainwright · Chandramouli Shama Sastry · Chao Tang · Daniel S. Brown · David Inouye · David Venuto · Dhruv Ramani · Dimitrios Diochnos · Divyam Madaan · Dmitrii Krashenikov · Joel Oren · Doyup Lee · Eleanor Quint · elmira amirloo · Matteo Pirotta · Gavin Hartnett · Geoffroy Dubourg-Felonneau · Gokul Swamy · Pin-Yu Chen · Ilija Bogunovic · Jason Carter · Javier Garcia-Barcos · Jeet Mohapatra · Jesse Zhang · Jian Qian · John Martin · Oliver Richter · Federico Zaiter · Tsui-Wei Weng · Karthik Abinav Sankararaman · Kyriakos Polymenakos · Lan Hoang · mahdieh abbasi · Marco Gallieri · Mathieu Seurin · Matteo Papini · Matteo Turchetta · Matthew Sotoudeh · Mehrdad Hosseinzadeh · Nathan Fulton · Masatoshi Uehara · Niranjani Prasad · Oana-Maria Camburu · Patrik Kolaric · Philipp Renz · Prateek Jaiswal · Reazul Hasan Russel · Riashat Islam · Rishabh Agarwal · Alexander Aldrick · Sachin Vernekar · Sahin Lale · Sai Kiran Narayanaswami · Samuel Daulton · Sanjam Garg · Sebastian East · Shun Zhang · Soheil Dsidbari · Justin Goodwin · Victoria Krakovna · Wenhao Luo · Wesley Chung · Yuanyuan Shi · Yuh-Shyang Wang · Hongwei Jin · Ziping Xu
- 2019 Poster: Hybrid 8-bit Floating Point (HFP8) Training and Inference for Deep Neural Networks »
Xiao Sun · Jungwook Choi · Chia-Yu Chen · Naigang Wang · Swagath Venkataramani · Vijayalakshmi (Viji) Srinivasan · Xiaodong Cui · Wei Zhang · Kailash Gopalakrishnan
- 2018 Poster: Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization »
Sijia Liu · Bhavya Kailkhura · Pin-Yu Chen · Paishun Ting · Shiyu Chang · Lisa Amini
- 2018 Poster: Efficient Neural Network Robustness Certification with General Activation Functions »
Huan Zhang · Tsui-Wei Weng · Pin-Yu Chen · Cho-Jui Hsieh · Luca Daniel
- 2018 Poster: Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives »
Amit Dhurandhar · Pin-Yu Chen · Ronny Luss · Chun-Chen Tu · Paishun Ting · Karthikeyan Shanmugam · Payel Das
- 2018 Poster: Training Deep Neural Networks with 8-bit Floating Point Numbers »
Naigang Wang · Jungwook Choi · Daniel Brand · Chia-Yu Chen · Kailash Gopalakrishnan
- 2018 Poster: Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks »
Xiaodong Cui · Wei Zhang · Zoltán Tüske · Michael Picheny
- 2017 Poster: Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent »
Xiangru Lian · Ce Zhang · Huan Zhang · Cho-Jui Hsieh · Wei Zhang · Ji Liu
- 2017 Oral: Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent »
Xiangru Lian · Ce Zhang · Huan Zhang · Cho-Jui Hsieh · Wei Zhang · Ji Liu
- 2017 Poster: Dilated Recurrent Neural Networks »
Shiyu Chang · Yang Zhang · Wei Han · Mo Yu · Xiaoxiao Guo · Wei Tan · Xiaodong Cui · Michael Witbrock · Mark Hasegawa-Johnson · Thomas Huang