The leap in performance in state-of-the-art computer vision methods is attributed to the development of deep neural networks. However, it often comes at a computational price which may hinder their deployment. To alleviate this limitation, structured pruning is a well-known technique which consists of removing channels, neurons, or filters, and is commonly applied in order to produce more compact models. In most cases, the computations to remove are selected based on a relative importance criterion. At the same time, the need for explainable predictive models has risen tremendously and motivated the development of robust attribution methods that highlight the relative importance of pixels of an input image or feature map. In this work, we discuss the limitations of existing pruning heuristics, among which magnitude- and gradient-based methods. We draw inspiration from attribution methods to design a novel integrated gradient pruning criterion, in which the relevance of each neuron is defined as the integral of the gradient variation on a path towards this neuron's removal. Furthermore, we propose an entwined DNN pruning and fine-tuning flowchart to better preserve DNN accuracy while removing parameters. We show through extensive validation on several datasets, architectures, and pruning scenarios that the proposed method, dubbed SInGE, significantly outperforms existing state-of-the-art DNN pruning methods.
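The criterion described above can be illustrated with a minimal NumPy sketch: a weight's importance is taken as the integral of the loss gradient along the path that scales the weight from its current value down to zero, approximated by a Riemann sum. All function names (`integrated_gradient_importance`, the toy `loss`) are illustrative assumptions, not the authors' actual SInGE implementation, and finite differences stand in for backpropagated gradients.

```python
import numpy as np

def _perturb(w, i, eps):
    """Return a copy of w with coordinate i shifted by eps."""
    v = w.copy()
    v[i] += eps
    return v

def integrated_gradient_importance(loss, w, steps=50, eps=1e-5):
    """Hypothetical sketch of an integrated-gradient pruning score.

    Approximates, for each weight w_i, the integral over alpha in [0, 1]
    of dL/dw_i evaluated at alpha * w, via a Riemann sum, then multiplies
    by the path length |w_i - 0|. Gradients are estimated with central
    finite differences for self-containedness.
    """
    grads = np.zeros_like(w)
    for k in range(1, steps + 1):
        alpha = k / steps
        point = alpha * w
        g = np.array([
            (loss(_perturb(point, i, eps)) - loss(_perturb(point, i, -eps))) / (2 * eps)
            for i in range(w.size)
        ])
        grads += g
    # average gradient along the removal path, scaled by the weight magnitude
    return np.abs(w * grads / steps)

# Toy usage: for a quadratic loss, the score reduces to roughly w_i**2,
# so the smallest-magnitude weight is ranked least important.
toy_loss = lambda w: float((w ** 2).sum())
scores = integrated_gradient_importance(toy_loss, np.array([3.0, 0.1, -2.0]))
```

With this toy loss, the weight `0.1` receives the lowest score and would be pruned first, matching the intuition that removing it perturbs the loss least along the entire path to removal, not just locally as in plain gradient-based criteria.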
Author Information
Edouard YVINEC (Computer Science Lab - Pierre and Marie Curie University, Paris, France)
Arnaud Dapogny (LIP6)
Matthieu Cord (Sorbonne University)
Kevin Bailly (ISIR, UMR 7222)
More from the Same Authors
- 2022 : Multi-Modal 3D GAN for Urban Scenes »
  Loïck Chambon · Mickael Chen · Tuan-Hung VU · Alexandre Boulch · Andrei Bursuc · Matthieu Cord · Patrick Pérez
- 2023 Poster: Rewarded soups: towards Pareto-optimality by interpolating weights fine-tuned on diverse rewards »
  Alexandre Rame · Guillaume Couairon · Corentin Dancette · Jean-Baptiste Gaya · Mustafa Shukor · Laure Soulier · Matthieu Cord
- 2023 Poster: REx: Data-Free Residual Quantization Error Expansion »
  Edouard YVINEC · Arnaud Dapogny · Matthieu Cord · Kevin Bailly
- 2023 Poster: OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents »
  Hugo Laurençon · Lucile Saulnier · Leo Tronchon · Stas Bekman · Amanpreet Singh · Anton Lozhkov · Thomas Wang · Siddharth Karamcheti · Alexander Rush · Douwe Kiela · Matthieu Cord · Victor Sanh
- 2022 Poster: Diverse Weight Averaging for Out-of-Distribution Generalization »
  Alexandre Rame · Matthieu Kirchmeyer · Thibaud Rahier · Alain Rakotomamonjy · Patrick Gallinari · Matthieu Cord
- 2021 Poster: RED : Looking for Redundancies for Data-Free Structured Compression of Deep Neural Networks »
  Edouard YVINEC · Arnaud Dapogny · Matthieu Cord · Kevin Bailly