Poster

How Sparse Can We Prune A Deep Network: A Fundamental Limit Perspective

Qiaozhe Zhang · Ruijie Zhang · Jun Sun · Yingzhuang Liu

East Exhibit Hall A-C #2201
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Network pruning is a commonly used technique to alleviate the storage and computational burden of deep neural networks. However, a characterization of the fundamental limit of network pruning is still lacking. To close this gap, we take a first-principles approach: we directly impose the sparsity constraint on the loss function and leverage the framework of statistical dimension from convex geometry, which allows us to characterize the sharp phase transition point, i.e., the fundamental limit of the pruning ratio. Through this fundamental limit, we identify two key factors that determine the pruning ratio limit, namely weight magnitude and network flatness. Generally speaking, the flatter the loss landscape or the smaller the weight magnitude, the smaller the pruning ratio. We also provide efficient countermeasures to the challenges in computing the pruning limit, which involves accurate spectrum estimation of a large-scale, non-positive-definite Hessian matrix. Furthermore, through the lens of the pruning ratio threshold, we provide rigorous interpretations of several heuristics used in existing pruning algorithms. Extensive experiments demonstrate that our theoretical pruning ratio threshold coincides very well with empirical results. All code is available at: https://anonymous.4open.science/r/Global-One-shot-Pruning-BC7B
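
The phase-transition characterization mentioned in the abstract builds on the statistical dimension of convex cones. As a hedged illustration only (standard convex-geometry notation, not the paper's exact statement), the statistical dimension of a cone C measures its effective degrees of freedom via the expected squared norm of a Gaussian projection onto C:

```latex
% Hedged illustration; the symbols C, g, and \Pi_C follow standard
% convex-geometry usage and are not taken from the paper itself.
\[
  \delta(C) \;=\; \mathbb{E}_{g \sim \mathcal{N}(0, I_d)}
  \bigl[\, \lVert \Pi_C(g) \rVert_2^2 \,\bigr],
\]
% where C \subseteq \mathbb{R}^d is a convex cone and \Pi_C denotes the
% Euclidean projection onto C.  Phase-transition results of this type state
% that a generically constrained convex program succeeds with high probability
% once the available degrees of freedom exceed \delta of the relevant descent
% cone, and fails below that threshold.
```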

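Since the abstract identifies spectrum estimation of a large, non-positive-definite Hessian as the main computational challenge, one standard matrix-free route is to combine Hessian-vector products with Lanczos iterations. The sketch below is illustrative only: the toy model, data, and the `hvp`/`lanczos` helpers are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): estimating extreme eigenvalues of an
# indefinite neural-network Hessian with Hessian-vector products plus Lanczos.
import torch

torch.manual_seed(0)

# Toy model and data as placeholders for a real network and dataset.
model = torch.nn.Sequential(torch.nn.Linear(20, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
x, y = torch.randn(128, 20), torch.randn(128, 1)
params = [p for p in model.parameters() if p.requires_grad]
n = sum(p.numel() for p in params)

def hvp(vec):
    """Hessian-vector product of the loss w.r.t. all parameters, flattened."""
    loss = torch.nn.functional.mse_loss(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad(flat_grad @ vec, params)
    return torch.cat([h.reshape(-1) for h in hv]).detach()

def lanczos(matvec, dim, steps=30):
    """Plain Lanczos: returns Ritz values approximating extreme eigenvalues."""
    q = torch.randn(dim)
    q /= q.norm()
    q_prev = torch.zeros(dim)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(steps):
        w = matvec(q)
        alpha = torch.dot(w, q).item()
        w = w - alpha * q - beta * q_prev
        beta = w.norm().item()
        alphas.append(alpha)
        if beta < 1e-10:  # invariant subspace found; stop early
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    # Assemble the tridiagonal Lanczos matrix and diagonalize it.
    T = torch.diag(torch.tensor(alphas))
    for i, b in enumerate(betas[: len(alphas) - 1]):
        T[i, i + 1] = T[i + 1, i] = b
    return torch.linalg.eigvalsh(T)

ritz = lanczos(hvp, n)
print("extreme Hessian eigenvalue estimates:", ritz.min().item(), ritz.max().item())
```

In practice such Ritz-value estimates are usually refined with re-orthogonalization, or extended to full spectral densities via stochastic Lanczos quadrature, when the whole Hessian spectrum is needed.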