In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deep neural networks. Unfortunately, the huge number of units in these networks makes them expensive both computationally and memory-wise. To overcome this, several compression strategies have been proposed that exploit the fact that deep networks are over-parametrized. These methods, however, typically start from a network that has been trained in a standard manner, without considering future compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn models that are much more compact than, yet at least as effective as, those produced by state-of-the-art compression techniques.
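To illustrate the idea of a low-rank regularizer applied during training, the sketch below adds a nuclear-norm (sum of singular values) penalty on each layer's flattened parameter matrix to the standard training loss. This is a minimal illustration under the assumption that a nuclear-norm surrogate is used; the paper's exact regularizer and optimization scheme may differ, and the names low_rank_penalty and lambda_lr are hypothetical.

```python
# Minimal sketch: penalize the rank of each layer's parameter matrix during training
# via a nuclear-norm surrogate (assumption; the paper's formulation may differ).
import torch
import torch.nn as nn

def low_rank_penalty(model: nn.Module) -> torch.Tensor:
    """Sum of nuclear norms of the (flattened) weight matrices of all linear/conv layers."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight
            # Reshape convolutional kernels to a 2D matrix: (out_channels, in_channels * k * k).
            w2d = w.reshape(w.shape[0], -1)
            penalty = penalty + torch.linalg.matrix_norm(w2d, ord='nuc')
    return penalty

# Usage inside a standard training step (lambda_lr is a hypothetical trade-off weight):
#   loss = criterion(model(x), y) + lambda_lr * low_rank_penalty(model)
#   loss.backward(); optimizer.step()
# After training, each weight matrix can be truncated to its dominant singular
# values to obtain the compressed model.
```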
Author Information
Jose Alvarez (TRI)
Mathieu Salzmann (EPFL)
More from the Same Authors
- 2021: SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation
  Robin Chan · Krzysztof Lis · Svenja Uhlemeyer · Hermann Blum · Sina Honari · Roland Siegwart · Pascal Fua · Mathieu Salzmann · Matthias Rottmann
- 2021 Poster: Distilling Image Classifiers in Object Detectors
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2021 Poster: Learning Transferable Adversarial Perturbations
  Krishna Kanth Nakka · Mathieu Salzmann
- 2020 Poster: On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
  Chen Liu · Mathieu Salzmann · Tao Lin · Ryota Tomioka · Sabine Süsstrunk
- 2020 Poster: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2020 Spotlight: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2019 Poster: Backpropagation-Friendly Eigendecomposition
  Wei Wang · Zheng Dang · Yinlin Hu · Pascal Fua · Mathieu Salzmann
- 2017 Poster: Deep Subspace Clustering Networks
  Pan Ji · Tong Zhang · Hongdong Li · Mathieu Salzmann · Ian Reid
- 2016 Poster: Learning the Number of Neurons in Deep Networks
  Jose M. Alvarez · Mathieu Salzmann