Poster
Bayesian Compression for Deep Learning
Christos Louizos · Karen Ullrich · Max Welling

Mon Dec 04 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #137

Compression and computational efficiency in deep learning have become problems of great significance. In this work, we argue that the most principled and effective way to attack these problems is by adopting a Bayesian point of view, where sparsity-inducing priors prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed-point precision at which to encode the weights. Both factors contribute significantly to state-of-the-art compression rates, while remaining competitive with methods designed to optimize for speed or energy efficiency.
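
As a rough illustration of the two ideas (not the authors' implementation; the variable names, the pruning threshold, and the bit-width rule below are assumptions made for the sketch), the following NumPy snippet prunes whole nodes via a signal-to-noise test on the posterior of each node-level scale, and then picks a fixed-point bit width from the smallest posterior standard deviation of the surviving weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in variational posteriors q(z_i) = N(mu_z[i], var_z[i]) for the
# node-level scales of one layer (illustrative values, not trained ones).
mu_z = rng.normal(1.0, 0.5, size=256)
var_z = rng.uniform(0.01, 2.0, size=256)

# Node-level pruning: a node whose scale has a low signal-to-noise ratio,
# i.e. a large alpha_i = var_z[i] / mu_z[i]**2, is indistinguishable from
# zero, so all of its incoming weights can be removed at once.
log_alpha = np.log(var_z) - 2.0 * np.log(np.abs(mu_z))
keep = log_alpha < 3.0  # threshold is an assumed heuristic
print(f"kept {keep.sum()} of {keep.size} nodes")

# Precision from posterior uncertainty: differences smaller than a weight's
# posterior standard deviation carry no information, so the fixed-point grid
# only needs to resolve the smallest posterior std in the layer.
mu_w = rng.normal(0.0, 0.3, size=(256, 256))[keep]
std_w = np.sqrt(rng.uniform(1e-4, 1e-2, size=(256, 256)))[keep]
bits = int(np.ceil(np.log2((mu_w.max() - mu_w.min()) / std_w.min())))
print(f"suggested fixed-point precision: {bits} bits")
```

In the paper these decisions come out of a variational framework with group sparsity-inducing priors (normal-Jeffreys and horseshoe); the sketch only mirrors the resulting pruning and quantization rules, not the training procedure.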

Author Information

Christos Louizos (University of Amsterdam)
Karen Ullrich (University of Amsterdam)

Research scientist (she/her) at FAIR NY, collaborating with the Vector Institute. ❤️ Deep Learning + Information Theory. Previously, Machine Learning PhD at the University of Amsterdam.

Max Welling (University of Amsterdam and University of California Irvine and CIFAR)