Poster
Sparse Flows: Pruning Continuous-depth Models
Lucas Liebenwein · Ramin Hasani · Alexander Amini · Daniela Rus

Fri Dec 10 08:30 AM -- 10:00 AM (PST)

Continuous deep learning architectures enable learning of flexible probabilistic models for predictive modeling as neural ordinary differential equations (ODEs), and for generative modeling as continuous normalizing flows. In this work, we design a framework to decipher the internal dynamics of these continuous-depth models by pruning their network architectures. Our empirical results suggest that pruning improves generalization for neural ODEs in generative modeling. We empirically show that this improvement occurs because pruning helps avoid mode collapse and flattens the loss surface. Moreover, pruning finds efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy. We hope our results will invigorate further research into the performance-size trade-offs of modern continuous-depth models.
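As a rough illustration of the pruning setup described in the abstract, the sketch below applies PyTorch's built-in L1 magnitude pruning to the vector field of a toy neural ODE. This is not the authors' implementation: the layer sizes, the 90% sparsity level, and the fixed-step Euler solver are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class VectorField(nn.Module):
    # f(t, y) parameterizing dy/dt of the neural ODE; sizes are illustrative.
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, t, y):
        return self.net(y)

def euler_odeint(f, y0, t0=0.0, t1=1.0, steps=20):
    # Fixed-step Euler integration standing in for an adaptive ODE solver.
    y, dt = y0, (t1 - t0) / steps
    for i in range(steps):
        y = y + dt * f(t0 + i * dt, y)
    return y

f = VectorField()
# Mask 90% of the weights in each linear layer by L1 magnitude
# (unstructured pruning); the sparsity level is an assumption.
for module in f.net:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

y1 = euler_odeint(f, torch.randn(8, 2))  # forward pass through the sparse model
print(y1.shape)  # torch.Size([8, 2])

A full pruning pipeline would typically alternate pruning with retraining; the masked forward pass above only demonstrates the sparse inference path.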

Author Information

Lucas Liebenwein (Massachusetts Institute of Technology)
Ramin Hasani (Massachusetts Institute of Technology)
Alexander Amini (Massachusetts Institute of Technology)
Daniela Rus (Massachusetts Institute of Technology)
