

Poster

(Not) Sparse Coding

Drew Bagnell · David M Bradley


Abstract: Prior work has shown that features that are both biologically plausible and empirically useful can be found by sparse coding with a sparsity-promoting prior such as a Laplacian ($L_1$). We show that a prior based on minimizing KL-divergence preserves the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation, and demonstrate how online optimization of the parameters of the KL-regularized model can significantly improve performance on a wide variety of applications.
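The implicit-differentiation step can be illustrated on a one-dimensional toy problem: solve for the MAP estimate $w^*(\theta)$ of a KL-regularized objective, then differentiate it through the stationarity condition via the implicit function theorem, $dw^*/d\theta = -f_{w\theta}/f_{ww}$. This is a minimal sketch, not the paper's model: the specific objective (a squared reconstruction error plus an unnormalized KL penalty), the constants, and the function names below are illustrative assumptions.

```python
import numpy as np

def map_estimate(theta, x=1.0, lam=0.1, p=0.5, iters=100):
    """Newton's method on the stationarity condition f_w(w*) = 0 of a toy
    KL-regularized objective:
        f(w) = 0.5*(x - theta*w)^2 + lam*(w*log(w/p) - w + p),  w > 0.
    """
    w = p  # start at the prior mean
    for _ in range(iters):
        f_w = -theta * (x - theta * w) + lam * np.log(w / p)   # gradient in w
        f_ww = theta**2 + lam / w                              # second derivative
        w = max(w - f_w / f_ww, 1e-8)  # Newton step, clipped to the positive domain
    return w

def dmap_dtheta(theta, x=1.0, lam=0.1, p=0.5):
    """Derivative of the MAP estimate w*(theta) by the implicit function
    theorem: dw*/dtheta = -f_wtheta / f_ww, evaluated at w*."""
    w = map_estimate(theta, x, lam, p)
    f_ww = theta**2 + lam / w
    f_wtheta = 2 * theta * w - x  # cross second derivative d/dtheta of f_w
    return -f_wtheta / f_ww
```

Because the KL term keeps the Hessian $f_{ww}$ strictly positive on the domain, the implicit derivative is well defined everywhere, which is the stability property the abstract refers to; the derivative can be checked against a central finite difference of `map_estimate`.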
