The increasing computational requirements of deep neural networks (DNNs) have led to significant interest in obtaining DNN models that are sparse, yet accurate. Recent work has investigated the even harder case of sparse training, where the DNN weights are kept sparse, as far as possible, throughout training in order to reduce computational costs. Existing sparse training methods are often empirical and can have lower accuracy relative to the dense baseline. In this paper, we present a general approach called Alternating Compressed/DeCompressed (AC/DC) training of DNNs, demonstrate convergence for a variant of the algorithm, and show that AC/DC outperforms existing sparse training methods in accuracy at similar computational budgets; at high sparsity levels, AC/DC even outperforms existing methods that rely on accurate pre-trained dense models. An important property of AC/DC is that it allows co-training of dense and sparse models, yielding accurate sparse-dense model pairs at the end of the training process. This is useful in practice, where compressed variants may be desirable for deployment in resource-constrained settings without re-doing the entire training flow, and it also provides insights into the accuracy gap between dense and compressed models.
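The abstract describes the alternation only at a high level; the sketch below illustrates one possible realization in PyTorch, where compressed phases apply a global magnitude mask and keep pruned weights at zero after every optimizer step, while decompressed phases train all weights freely. The phase length, sparsity level, optimizer settings, and masking criterion are illustrative assumptions, not the exact schedule from the paper (which, for example, uses a dense warm-up and specific phase lengths).

```python
# Minimal sketch of alternating compressed/decompressed training in the spirit of AC/DC.
# All hyperparameters below (phase_len, sparsity, lr, epochs) are illustrative assumptions.
import torch
import torch.nn as nn


def apply_global_magnitude_mask(model, sparsity):
    """Zero out the globally smallest-magnitude weights; return (param, mask) pairs."""
    weights = [p for p in model.parameters() if p.dim() > 1]  # prune weight matrices/filters only
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = int(sparsity * scores.numel())
    threshold = torch.kthvalue(scores, k).values if k > 0 else scores.new_tensor(-1.0)
    masks = []
    for w in weights:
        mask = (w.detach().abs() > threshold).to(w.dtype)
        with torch.no_grad():
            w.mul_(mask)
        masks.append((w, mask))
    return masks


def acdc_train(model, loader, epochs=12, phase_len=2, sparsity=0.9, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    masks = None
    for epoch in range(epochs):
        # Alternate phases of `phase_len` epochs: start decompressed (dense), then compressed.
        compressed = (epoch // phase_len) % 2 == 1
        if compressed and masks is None:
            masks = apply_global_magnitude_mask(model, sparsity)  # fresh mask each compressed phase
        elif not compressed:
            masks = None  # decompressed phase: all weights train again
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if masks is not None:
                # Keep pruned weights at zero throughout the compressed phase.
                with torch.no_grad():
                    for w, mask in masks:
                        w.mul_(mask)
    return model
```

Because the same run produces both a masked model (end of a compressed phase) and unmasked weights (end of a decompressed phase), one can keep both checkpoints, which is one way the sparse-dense model pairs mentioned in the abstract can be obtained from a single training flow.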
Author Information
Alexandra Peste (IST Austria)
Eugenia Iofinova (Institute of Science and Technology Austria)
Adrian Vladu (IRIF)
Dan Alistarh (IST Austria & NeuralMagic)
More from the Same Authors
- 2021: SSSE: Efficiently Erasing Samples from Trained Machine Learning Models
  Alexandra Peste · Dan Alistarh · Christoph Lampert
- 2023 Poster: Knowledge Distillation Performs Partial Variance Reduction
  Mher Safaryan · Alexandra Peste · Dan Alistarh
- 2022 Poster: Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning
  Elias Frantar · Dan Alistarh
- 2021 Poster: M-FAC: Efficient Matrix-Free Approximations of Second-Order Information
  Elias Frantar · Eldar Kurtic · Dan Alistarh
- 2021 Poster: Distributed Principal Component Analysis with Limited Communication
  Foivos Alimisis · Peter Davies · Bart Vandereycken · Dan Alistarh
- 2021 Poster: Towards Tight Communication Lower Bounds for Distributed Optimisation
  Janne H. Korhonen · Dan Alistarh
- 2021 Poster: Asynchronous Decentralized SGD with Quantized and Local Updates
  Giorgi Nadiradze · Amirmojtaba Sabour · Peter Davies · Shigang Li · Dan Alistarh
- 2019 Poster: Powerset Convolutional Neural Networks
  Chris Wendler · Markus Püschel · Dan Alistarh
- 2017 Poster: QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
  Dan Alistarh · Demjan Grubic · Jerry Li · Ryota Tomioka · Milan Vojnovic
- 2017 Spotlight: Communication-Efficient Stochastic Gradient Descent, with Applications to Neural Networks
  Dan Alistarh · Demjan Grubic · Jerry Li · Ryota Tomioka · Milan Vojnovic