Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.
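As a rough illustration of the idea, the sketch below assumes a PyTorch-style setup; the function name `group_sparsity_penalty` and the weight `lam` are illustrative, not taken from the paper's code. It computes a group-lasso penalty in which each group gathers the incoming weights of one neuron, so driving a group norm to zero effectively removes that neuron from the network.

```python
import torch
import torch.nn as nn

def group_sparsity_penalty(model, lam=1e-4):
    """Group-lasso term with one group per neuron (a row of a Linear layer
    or one Conv2d filter), scaled by the square root of the group size as
    is common for the group lasso."""
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            # weight shape: (num_neurons, ...); flatten so each row is one group
            w = module.weight.flatten(start_dim=1)
            group_size = w.shape[1]
            penalty = penalty + (group_size ** 0.5) * w.norm(dim=1).sum()
    return lam * penalty

# Usage sketch: start from an over-complete MLP and add the penalty to the
# task loss; neurons whose group norm vanishes can later be pruned.
mlp = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(mlp(x), y) + group_sparsity_penalty(mlp)
loss.backward()
```

The paper's exact formulation may differ (e.g., in how convolutional groups are defined and how the regularizer is balanced against the task loss); the sketch only conveys the per-neuron grouping described in the abstract.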
Author Information
Jose M. Alvarez (NICTA)
Mathieu Salzmann (EPFL)
More from the Same Authors
- 2021: SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation
  Robin Chan · Krzysztof Lis · Svenja Uhlemeyer · Hermann Blum · Sina Honari · Roland Siegwart · Pascal Fua · Mathieu Salzmann · Matthias Rottmann
- 2022 Poster: Contact-aware Human Motion Forecasting
  Wei Mao · miaomiao Liu · Richard I Hartley · Mathieu Salzmann
- 2022 Spotlight: Lightning Talks 4B-3
  Zicheng Zhang · Mancheng Meng · Antoine Guedon · Yue Wu · Wei Mao · Zaiyu Huang · Peihao Chen · Shizhe Chen · yongwei chen · Keqiang Sun · Yi Zhu · chen rui · Hanhui Li · Dongyu Ji · Ziyan Wu · miaomiao Liu · Pascal Monasse · Yu Deng · Shangzhe Wu · Pierre-Louis Guhur · Jiaolong Yang · Kunyang Lin · Makarand Tapaswi · Zhaoyang Huang · Terrence Chen · Jiabao Lei · Jianzhuang Liu · Vincent Lepetit · Zhenyu Xie · Richard I Hartley · Dinggang Shen · Xiaodan Liang · Runhao Zeng · Cordelia Schmid · Michael Kampffmeyer · Mathieu Salzmann · Ning Zhang · Fangyun Wei · Yabin Zhang · Fan Yang · Qifeng Chen · Wei Ke · Quan Wang · Thomas Li · qingling Cai · Kui Jia · Ivan Laptev · Mingkui Tan · Xin Tong · Hongsheng Li · Xiaodan Liang · Chuang Gan
- 2022 Spotlight: Contact-aware Human Motion Forecasting
  Wei Mao · miaomiao Liu · Richard I Hartley · Mathieu Salzmann
- 2022 Poster: Robust Binary Models by Pruning Randomly-initialized Networks
  Chen Liu · Ziqi Zhao · Sabine Süsstrunk · Mathieu Salzmann
- 2021 Poster: Distilling Image Classifiers in Object Detectors
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2021 Poster: Learning Transferable Adversarial Perturbations
  Krishna kanth Nakka · Mathieu Salzmann
- 2020 Poster: On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
  Chen Liu · Mathieu Salzmann · Tao Lin · Ryota Tomioka · Sabine Süsstrunk
- 2020 Poster: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2020 Spotlight: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2019 Poster: Backpropagation-Friendly Eigendecomposition
  Wei Wang · Zheng Dang · Yinlin Hu · Pascal Fua · Mathieu Salzmann
- 2017 Poster: Compression-aware Training of Deep Networks
  Jose Alvarez · Mathieu Salzmann
- 2017 Poster: Deep Subspace Clustering Networks
  Pan Ji · Tong Zhang · Hongdong Li · Mathieu Salzmann · Ian Reid