Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
Aidan Good · Jiaqi Lin · Xin Yu · Hannah Sieg · Mikey Fergurson · Shandian Zhe · Jerzy Wieczorek · Thiago Serra

Thu Dec 08 09:00 AM -- 11:00 AM (PST) @

Pruning techniques have been successfully used in neural networks to trade accuracy for sparsity. However, the impact of network pruning is not uniform: prior work has shown that the recall for underrepresented classes in a dataset may be more negatively affected. In this work, we study such relative distortions in recall by hypothesizing an intensification effect that is inherent to the model: pruning makes recall relatively worse for a class with recall below accuracy and, conversely, makes recall relatively better for a class with recall above accuracy. In addition, we propose a new pruning algorithm aimed at attenuating this effect. Through statistical analysis, we observe that intensification is less severe with our algorithm, but nevertheless more pronounced with relatively more difficult tasks, less complex models, and higher pruning ratios. More surprisingly, we observe the opposite, a de-intensification effect, with lower pruning ratios.
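To make the quantities in the abstract concrete, the following is a minimal sketch of global magnitude pruning together with per-class recall computation, the metric whose distortion the paper studies. This is an illustrative baseline only, not the authors' proposed "undecayed" algorithm; the function names and the tiny synthetic setup are assumptions for demonstration.

```python
import numpy as np

def magnitude_prune(weights, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of the weights.

    A standard magnitude-pruning baseline (assumed here for illustration);
    the paper's undecayed pruning algorithm is not reproduced in this page.
    """
    flat = np.abs(weights).ravel()
    k = int(ratio * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def per_class_recall(y_true, y_pred, num_classes):
    """Recall for each class: correct predictions / true members of the class.

    Comparing these values before and after pruning (against overall
    accuracy) is how a recall distortion would be measured.
    """
    recalls = []
    for c in range(num_classes):
        mask = y_true == c
        recalls.append(float((y_pred[mask] == c).mean()) if mask.any() else 0.0)
    return recalls
```

Measuring recall per class (rather than accuracy alone) is what exposes the intensification effect: a class whose recall is already below accuracy can lose disproportionately more recall as the pruning ratio grows.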

Author Information

Aidan Good (Bucknell University)
Jiaqi Lin (Bucknell University)
Xin Yu (University of Utah)
Hannah Sieg
Mikey Fergurson
Shandian Zhe (University of Utah)
Jerzy Wieczorek (Colby College)
Thiago Serra (Bucknell University)
