We introduce an approach to training a given compact network. To this end, we leverage over-parameterization, which typically improves both neural network optimization and generalization. Specifically, we propose to expand each linear layer of the compact network into multiple consecutive linear layers, without adding any nonlinearity. The resulting expanded network, or ExpandNet, can then be contracted back to the compact one algebraically at inference. In particular, we introduce two convolutional expansion strategies and demonstrate their benefits on several tasks, including image classification, object detection, and semantic segmentation. As evidenced by our experiments, our approach outperforms both training the compact network from scratch and performing knowledge distillation from a teacher. Furthermore, our linear over-parameterization empirically reduces gradient confusion during training and improves network generalization.
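To make the contraction step concrete, here is a minimal PyTorch sketch (not the authors' released code) of the core idea for a single fully-connected layer: train two consecutive linear layers with no nonlinearity in between, then collapse them into one compact layer by multiplying the weight matrices. All dimensions and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative dimensions, not taken from the paper.
in_dim, hidden, out_dim = 16, 64, 8

# Expanded network: two consecutive linear layers with NO nonlinearity
# in between, so the composition remains a single linear map.
expanded = nn.Sequential(
    nn.Linear(in_dim, hidden, bias=False),
    nn.Linear(hidden, out_dim, bias=False),
)

# Algebraic contraction at inference: y = W2 (W1 x) = (W2 W1) x, so the
# product of the two weight matrices gives one compact linear layer.
compact = nn.Linear(in_dim, out_dim, bias=False)
with torch.no_grad():
    compact.weight.copy_(expanded[1].weight @ expanded[0].weight)

# The expanded and contracted networks compute the same function.
x = torch.randn(4, in_dim)
assert torch.allclose(expanded(x), compact(x), atol=1e-5)
```

The paper's convolutional expansion strategies contract analogously, since composing linear convolutions, again with no intervening nonlinearity, yields another linear convolution.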
Author Information
Shuxuan Guo (EPFL)
Jose M. Alvarez (NVIDIA)
Mathieu Salzmann (EPFL)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Tue. Dec 8th, 05:00 -- 07:00 PM, Poster Session 1 #327
More from the Same Authors
- 2021: SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation
  Robin Chan · Krzysztof Lis · Svenja Uhlemeyer · Hermann Blum · Sina Honari · Roland Siegwart · Pascal Fua · Mathieu Salzmann · Matthias Rottmann
- 2021: Object-Level Targeted Selection via Deep Template Matching
  Suraj Kothawade · Michele Fenzi · Elmar Haussmann · Jose M. Alvarez · Christoph Angerer
- 2022 Poster: Contact-aware Human Motion Forecasting
  Wei Mao · miaomiao Liu · Richard I Hartley · Mathieu Salzmann
- 2023 Poster: SE(3) Diffusion Model-based Point Cloud Registration for Robust 6D Object Pose Estimation
  Haobo Jiang · Mathieu Salzmann · Zheng Dang · Jin Xie · Jian Yang
- 2022 Spotlight: Lightning Talks 4B-3
  Zicheng Zhang · Mancheng Meng · Antoine Guedon · Yue Wu · Wei Mao · Zaiyu Huang · Peihao Chen · Shizhe Chen · Yongwei Chen · Keqiang Sun · Yi Zhu · chen rui · Hanhui Li · Dongyu Ji · Ziyan Wu · miaomiao Liu · Pascal Monasse · Yu Deng · Shangzhe Wu · Pierre-Louis Guhur · Jiaolong Yang · Kunyang Lin · Makarand Tapaswi · Zhaoyang Huang · Terrence Chen · Jiabao Lei · Jianzhuang Liu · Vincent Lepetit · Zhenyu Xie · Richard I Hartley · Dinggang Shen · Xiaodan Liang · Runhao Zeng · Cordelia Schmid · Michael Kampffmeyer · Mathieu Salzmann · Ning Zhang · Fangyun Wei · Yabin Zhang · Fan Yang · Qifeng Chen · Wei Ke · Quan Wang · Thomas Li · qingling Cai · Kui Jia · Ivan Laptev · Mingkui Tan · Xin Tong · Hongsheng Li · Xiaodan Liang · Chuang Gan
- 2022 Spotlight: Contact-aware Human Motion Forecasting
  Wei Mao · miaomiao Liu · Richard I Hartley · Mathieu Salzmann
- 2022 Poster: Robust Binary Models by Pruning Randomly-initialized Networks
  Chen Liu · Ziqi Zhao · Sabine Süsstrunk · Mathieu Salzmann
- 2021 Poster: Distilling Image Classifiers in Object Detectors
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2021 Poster: Learning Transferable Adversarial Perturbations
  Krishna kanth Nakka · Mathieu Salzmann
- 2020 Poster: On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
  Chen Liu · Mathieu Salzmann · Tao Lin · Ryota Tomioka · Sabine Süsstrunk
- 2019 Poster: Backpropagation-Friendly Eigendecomposition
  Wei Wang · Zheng Dang · Yinlin Hu · Pascal Fua · Mathieu Salzmann
- 2017 Poster: Compression-aware Training of Deep Networks
  Jose Alvarez · Mathieu Salzmann
- 2017 Poster: Deep Subspace Clustering Networks
  Pan Ji · Tong Zhang · Hongdong Li · Mathieu Salzmann · Ian Reid
- 2016 Poster: Learning the Number of Neurons in Deep Networks
  Jose M. Alvarez · Mathieu Salzmann