Pruning techniques have been used successfully in neural networks to trade accuracy for sparsity. However, the impact of network pruning is not uniform: prior work has shown that recall for underrepresented classes in a dataset may be affected more negatively. In this work, we study such relative distortions in recall by hypothesizing an intensification effect that is inherent to the model: pruning makes recall relatively worse for a class whose recall is below the overall accuracy and, conversely, relatively better for a class whose recall is above it. In addition, we propose a new pruning algorithm aimed at attenuating this effect. Through statistical analysis, we observe that intensification is less severe with our algorithm but nevertheless more pronounced for relatively more difficult tasks, less complex models, and higher pruning ratios. More surprisingly, we conversely observe a de-intensification effect at lower pruning ratios.
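The quantities involved in the hypothesized intensification effect can be made concrete with a small sketch. The code below is not the paper's Undecayed pruning algorithm; it is a minimal illustration assuming standard magnitude pruning, with hypothetical helper names `magnitude_prune` and `per_class_recall`. The intensification hypothesis would then be checked by comparing, for each class, the sign of (recall − accuracy) before pruning against the change in that gap after pruning.

```python
import numpy as np

def magnitude_prune(weights, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of the weights.

    A common baseline pruning scheme: keep the largest-magnitude
    entries and set the rest to zero (ties may prune slightly more).
    """
    flat = np.abs(weights).ravel()
    k = int(ratio * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def per_class_recall(y_true, y_pred, num_classes):
    """Recall (fraction of class-c examples predicted as c) per class."""
    return np.array([
        np.mean(y_pred[y_true == c] == c) for c in range(num_classes)
    ])

# Toy check of the per-class recall vs. overall accuracy gap:
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])
recalls = per_class_recall(y_true, y_pred, num_classes=2)
accuracy = np.mean(y_true == y_pred)
gaps = recalls - accuracy  # class 0 sits below accuracy, class 1 above
```

Under the intensification hypothesis, classes with a negative gap before pruning would see their gap grow more negative at high pruning ratios, and vice versa for classes with a positive gap.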
Author Information
Aidan Good (Bucknell University)
Jiaqi Lin (Bucknell University)
Xin Yu (University of Utah)
Hannah Sieg
Mikey Fergurson
Shandian Zhe (University of Utah)
Jerzy Wieczorek (Colby College)
Thiago Serra (Bucknell University)
More from the Same Authors
- 2022 Spotlight: Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
  Aidan Good · Jiaqi Lin · Xin Yu · Hannah Sieg · Mikey Fergurson · Shandian Zhe · Jerzy Wieczorek · Thiago Serra
- 2022 Poster: Infinite-Fidelity Coregionalization for Physical Simulation
  Shibo Li · Zheng Wang · Robert Kirby · Shandian Zhe
- 2022 Poster: Batch Multi-Fidelity Active Learning with Budget Constraints
  Shibo Li · Jeff M Phillips · Xin Yu · Robert Kirby · Shandian Zhe
- 2021 Poster: Self-Adaptable Point Processes with Nonparametric Time Decays
  Zhimeng Pan · Zheng Wang · Jeff M Phillips · Shandian Zhe
- 2021 Poster: Characterizing possible failure modes in physics-informed neural networks
  Aditi Krishnapriyan · Amir Gholami · Shandian Zhe · Robert Kirby · Michael Mahoney
- 2021 Poster: Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks
  Shibo Li · Robert Kirby · Shandian Zhe
- 2020 Poster: Multi-Fidelity Bayesian Optimization via Deep Neural Networks
  Shibo Li · Wei Xing · Robert Kirby · Shandian Zhe
- 2018 Poster: Stochastic Nonparametric Event-Tensor Decomposition
  Shandian Zhe · Yishuai Du
- 2018 Spotlight: Stochastic Nonparametric Event-Tensor Decomposition
  Shandian Zhe · Yishuai Du