Poster
Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee
Alireza Aghasi · Afshin Abdi · Nam Nguyen · Justin Romberg
Pacific Ballroom #215
Keywords: [ Convex Optimization ] [ Deep Learning ] [ Theory ] [ Sparsity and Compressed Sensing ] [ Regularization ]
Abstract:
We introduce and analyze a new technique for model reduction in deep neural networks. While large networks are theoretically capable of learning arbitrarily complex models, overfitting and model redundancy negatively affect the prediction accuracy and model variance. Our Net-Trim algorithm prunes (sparsifies) a trained network layer-wise, removing connections at each layer by solving a convex optimization program. This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model. The algorithms and associated analysis are applicable to neural networks operating with the rectified linear unit (ReLU) as the nonlinear activation. We present both parallel and cascade versions of the algorithm. While the latter can achieve slightly simpler models with the same generalization performance, the former can be computed in a distributed manner. In both cases, Net-Trim significantly reduces the number of connections in the network, while also providing enough regularization to slightly reduce the generalization error. We also provide a mathematical analysis of the consistency between the initial network and the retrained model. To analyze the model sample complexity, we derive the general sufficient conditions for the recovery of a sparse transform matrix. For a single layer taking independent Gaussian random vectors as inputs, we show that if the network response can be described using a maximum number of $s$ non-zero weights per node, these weights can be learned from $O(s\log N)$ samples.
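To make the layer-wise idea concrete, below is a minimal, hedged sketch of the kind of convex program the abstract describes: given a layer's inputs X and its ReLU outputs Y from the trained network, search for a sparse weight matrix whose ReLU response stays close to Y. The function name `net_trim_layer`, the cvxpy formulation, the epsilon budget, and the exact way the active/inactive output entries are split into constraints are illustrative assumptions, not the authors' reference implementation; the paper's precise program and guarantees should be consulted for the real formulation.

```python
# Simplified sketch of a per-layer Net-Trim-style convex program (assumed
# formulation, not the authors' code). Requires numpy and cvxpy.
import numpy as np
import cvxpy as cp

def net_trim_layer(X, Y, eps):
    """Prune one ReLU layer by convex optimization (illustrative sketch).

    X   : (n, p) array, layer inputs (columns are training samples)
    Y   : (m, p) array, layer outputs, Y = relu(W.T @ X) for the trained W
    eps : budget on how much the active responses may deviate
    """
    n, p = X.shape
    m = Y.shape[0]

    W_hat = cp.Variable((n, m))          # candidate sparse weights
    Z = W_hat.T @ X                      # pre-activations of the candidate layer

    on = (Y > 0).astype(float)           # entries where the original ReLU fired
    off = 1.0 - on                       # entries where the original output was zero

    constraints = [
        # keep the active outputs close to the original responses
        cp.norm(cp.multiply(on, Z - Y), 'fro') <= eps,
        # keep the originally inactive units switched off (non-positive pre-activation)
        cp.multiply(off, Z) <= 0,
    ]

    # promote sparsity via an elementwise l1 objective
    prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(W_hat))), constraints)
    prob.solve()
    return W_hat.value
```

In this sketch, applying `net_trim_layer` independently to each layer corresponds to the parallel variant, while feeding each pruned layer's outputs forward as the next layer's inputs corresponds to the cascade variant described in the abstract.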