

Poster

Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models

Lujun Li · Peijie Dong · Zhenheng Tang · Xiang Liu · Qiang Wang · Wenhan Luo · Wei Xue · Qifeng Liu · Xiaowen Chu · Yike Guo

East Exhibit Hall A-C #2606
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: In this paper, we present DSA, the first automated framework for discovering sparsity allocation schemes for layer-wise pruning in Large Language Models (LLMs). LLMs have become increasingly powerful, but their large parameter counts make them computationally expensive. Existing pruning methods for compressing LLMs primarily focus on evaluating redundancies and removing element-wise weights. However, these methods fail to allocate adaptive layer-wise sparsities, leading to performance degradation on challenging tasks. We observe that per-layer importance statistics can serve as allocation indicators, but their effectiveness depends on the allocation function between layers. To address this issue, we develop an expression discovery framework to explore potential allocation strategies. Our allocation functions involve two steps: reducing element-wise metrics to per-layer importance scores, and mapping layer importance scores to sparsity ratios. To search for the most effective allocation function, we construct a search space consisting of pre-process, reduction, transform, and post-process operations. We leverage an evolutionary algorithm to perform crossover and mutation on superior candidates within the population, guided by performance evaluation. Finally, we seamlessly integrate our discovered functions into various uniform methods, resulting in significant performance improvements. We conduct extensive experiments on multiple challenging tasks such as arithmetic, knowledge reasoning, and multimodal benchmarks spanning GSM8K, MMLU, SQA, and VQA, demonstrating that our DSA method achieves significant performance gains on the LLaMA-1|2|3, Mistral, and OPT models. Notably, the LLaMA-1|2|3 models pruned by our DSA achieve $7.48\%$|$5.69\%$|$14.14\%$ gains over the state-of-the-art methods Wanda and SparseGPT. Code is available in the supplementary materials.
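To make the two-step allocation function described in the abstract concrete, below is a minimal sketch of how element-wise pruning metrics could be reduced to per-layer importance scores and then mapped to per-layer sparsity ratios. All function names, the choice of log-mean reduction, and the linear inverse mapping are illustrative assumptions for exposition; they are not the specific operations discovered by DSA or the paper's actual API.

```python
import numpy as np

# Hypothetical sketch of a layer-wise sparsity allocation function in the spirit of
# the abstract: an element-wise pruning metric per layer is reduced to a scalar
# importance score, which is then transformed and post-processed into per-layer
# sparsity ratios whose average matches a global target sparsity.

def layer_importance(elementwise_metric: np.ndarray) -> float:
    """Reduction step: collapse an element-wise metric matrix into one scalar per layer.

    Here we pre-process with log1p and reduce with a mean; other candidate
    operations (sqrt, softmax, sum, max, ...) would occupy the same slot in a
    search space of pre-process and reduction ops.
    """
    return float(np.mean(np.log1p(elementwise_metric)))

def allocate_sparsity(importances, target_sparsity=0.5, sensitivity=1.0):
    """Transform + post-process step: map per-layer importance to sparsity ratios.

    More important layers receive lower sparsity; ratios are rescaled so their
    mean matches the global target, then clipped to a valid range.
    """
    imp = np.asarray(importances, dtype=np.float64)
    # Transform: normalize importance to [0, 1], then invert it so that
    # high importance -> low sparsity.
    norm = (imp - imp.min()) / (imp.max() - imp.min() + 1e-12)
    raw = 1.0 - sensitivity * norm
    # Post-process: rescale toward the target average sparsity and clip.
    raw = raw * (target_sparsity / raw.mean())
    return np.clip(raw, 0.0, 0.99)

if __name__ == "__main__":
    # Toy example: 4 layers with random element-wise metrics of differing scale.
    rng = np.random.default_rng(0)
    metrics = [rng.random((64, 64)) * (i + 1) for i in range(4)]
    imps = [layer_importance(m) for m in metrics]
    ratios = allocate_sparsity(imps, target_sparsity=0.5)
    print("per-layer importance:", np.round(imps, 3))
    print("per-layer sparsity:  ", np.round(ratios, 3))  # averages to roughly 0.5
```

In the framework described by the abstract, each of these hand-picked operations would instead be a searchable slot, and an evolutionary loop would apply crossover and mutation to well-performing candidate expressions, scoring each candidate by the pruned model's downstream performance.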
