Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient
Ankit Pensia · Shashank Rajput · Alliot Nagle · Harit Vishwakarma · Dimitris Papailiopoulos

Tue Dec 08 08:20 AM -- 08:30 AM (PST) @ Orals & Spotlights: Deep Learning
The strong lottery ticket hypothesis (LTH) postulates that one can approximate any target neural network by only pruning the weights of a sufficiently over-parameterized random network. A recent work by Malach et al. [MYSS20] establishes the first theoretical analysis for the strong LTH: one can provably approximate a neural network of width $d$ and depth $l$, by pruning a random one that is a factor $O(d^4 l^2)$ wider and twice as deep. This polynomial over-parameterization requirement is at odds with recent experimental research that achieves good approximation with networks that are only a small factor wider than the target. In this work, we close the gap and offer an exponential improvement to the over-parameterization requirement for the existence of lottery tickets. We show that any target network of width $d$ and depth $l$ can be approximated by pruning a random network that is a factor $O(\log(dl))$ wider and twice as deep. Our analysis heavily relies on connecting pruning random ReLU networks to random instances of the Subset Sum problem. We then show that this logarithmic over-parameterization is essentially optimal for constant-depth networks. Finally, we verify several of our theoretical insights with experiments.
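To make the subset-sum connection concrete, here is a minimal, self-contained sketch (not the paper's code): with high probability, a target value $t \in [-1, 1]$ can be approximated to error $\epsilon$ by summing a subset of only $O(\log(1/\epsilon))$ i.i.d. Uniform$[-1, 1]$ samples. The function name, sample count, and target below are illustrative assumptions.

import itertools

import numpy as np

def best_subset_sum(samples, target):
    """Brute-force the subset of `samples` whose sum is closest to `target`.

    Exponential-time search, used here only to illustrate the
    concentration phenomenon on a handful of samples.
    """
    best_err, best_subset = abs(target), ()  # empty subset as baseline
    for r in range(1, len(samples) + 1):
        for subset in itertools.combinations(samples, r):
            err = abs(target - sum(subset))
            if err < best_err:
                best_err, best_subset = err, subset
    return best_subset, best_err

rng = np.random.default_rng(0)
n = 16                       # roughly log(1/eps) samples suffice w.h.p.
samples = rng.uniform(-1.0, 1.0, size=n)
target = 0.371               # hypothetical target weight in [-1, 1]
subset, err = best_subset_sum(samples, target)
print(f"|target - sum(subset)| = {err:.2e} using {len(subset)}/{n} samples")

Roughly speaking, in the paper's construction the candidate values are not raw uniforms but products of random weights along paths through the twice-as-deep random ReLU network, and the pruning mask plays the role of the subset indicator; the logarithmic scaling of the sample count carries over.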

Author Information

Ankit Pensia (University of Wisconsin-Madison)
Shashank Rajput (University of Wisconsin-Madison)
Alliot Nagle (University of Wisconsin-Madison; ECE PhD @ UT Austin)
Harit Vishwakarma (University of Wisconsin-Madison)
Dimitris Papailiopoulos (University of Wisconsin-Madison)
