We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomplete linear models. Particular emphasis is placed on statistical image modeling, where overcomplete models have played an important role in discovering sparse representations. Our Gibbs sampler is faster than general-purpose sampling schemes and, being parameter-free, requires no tuning. Using the Gibbs sampler and a persistent variant of expectation maximization, we are able to extract highly sparse distributions over latent sources from data. When applied to natural images, our algorithm learns source distributions which resemble spike-and-slab distributions. We evaluate the likelihood and quantitatively compare the performance of the overcomplete linear model to its complete counterpart, as well as to a product of experts model, which represents another overcomplete generalization of the complete linear model. In contrast to previous claims, we find that overcomplete representations lead to significant improvements, but that the overcomplete linear model still underperforms the other models.
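The abstract does not spell out the sampler itself. As a rough illustration of what Gibbs sampling looks like in a spike-and-slab linear model (a generic coordinate-wise sketch, not the authors' blocked, extended-state-space sampler; all names and hyperparameters here are placeholder assumptions):

```python
import numpy as np

def gibbs_sweep(x, A, s, sigma2=0.1, tau2=1.0, pi=0.1, rng=None):
    """One coordinate-wise Gibbs sweep for the model
    x ~ N(A s, sigma2 I), where each source s_i is zero with
    probability 1 - pi and drawn from N(0, tau2) otherwise.
    (Illustrative only; the paper uses a faster blocked sampler.)"""
    rng = np.random.default_rng() if rng is None else rng
    K = A.shape[1]
    r = x - A @ s                                  # current residual
    for i in range(K):
        a = A[:, i]
        r += a * s[i]                              # residual with s_i removed
        v = 1.0 / (a @ a / sigma2 + 1.0 / tau2)    # slab posterior variance
        m = v * (a @ r) / sigma2                   # slab posterior mean
        # log Bayes factor for s_i being active (slab) vs zero (spike)
        log_bf = 0.5 * np.log(v / tau2) + 0.5 * m**2 / v
        p_active = pi / (pi + (1.0 - pi) * np.exp(-log_bf))
        s[i] = rng.normal(m, np.sqrt(v)) if rng.random() < p_active else 0.0
        r -= a * s[i]                              # restore residual
    return s
```

Each coordinate update first computes the Gaussian posterior over the slab value, then flips the spike/slab indicator according to the marginal evidence ratio; the blocked sampler in the paper speeds this up by updating groups of variables jointly.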
Author Information
Lucas Theis (Twitter)
Jascha Sohl-Dickstein (Google)
Matthias Bethge (University of Tübingen)
More from the Same Authors
- 2021: Fast Finite Width Neural Tangent Kernel »
  Roman Novak · Jascha Sohl-Dickstein · Samuel Schoenholz
- 2023 Poster: RDumb: A simple approach that questions our progress in continual test-time adaptation »
  Ori Press · Steffen Schneider · Matthias Kümmerer · Matthias Bethge
- 2023 Poster: Modulated Neural ODEs »
  Ilze Amanda Auzina · Çağatay Yıldız · Sara Magliacane · Matthias Bethge · Efstratios Gavves
- 2023 Poster: Compositional Generalization from First Principles »
  Thaddäus Wiedemer · Prasanna Mayilvahanan · Matthias Bethge · Wieland Brendel
- 2021 Poster: Reverse engineering learned optimizers reveals known and novel mechanisms »
  Niru Maheswaranathan · David Sussillo · Luke Metz · Ruoxi Sun · Jascha Sohl-Dickstein
- 2018: Adversarial Vision Challenge: Results of the Adversarial Vision Challenge »
  Wieland Brendel · Jonas Rauber · Marcel Salathé · Alexey Kurakin · Nicolas Papernot · Sharada Mohanty · Matthias Bethge
- 2017: DeepArt competition »
  Alexander Ecker · Leon A Gatys · Matthias Bethge
- 2017 Poster: Neural system identification for large populations separating “what” and “where” »
  David Klindt · Alexander Ecker · Thomas Euler · Matthias Bethge
- 2016: Matthias Bethge - Texture perception in humans and machines »
  Matthias Bethge
- 2015 Workshop: Statistical Methods for Understanding Neural Systems »
  Alyson Fletcher · Jakob H Macke · Ryan Adams · Jascha Sohl-Dickstein
- 2015 Poster: Texture Synthesis Using Convolutional Neural Networks »
  Leon A Gatys · Alexander Ecker · Matthias Bethge
- 2015 Poster: Generative Image Modeling Using Spatial LSTMs »
  Lucas Theis · Matthias Bethge
- 2010 Poster: Evaluating neuronal codes for inference using Fisher information »
  Ralf Haefner · Matthias Bethge
- 2009 Poster: Hierarchical Modeling of Local Image Features through $L_p$-Nested Symmetric Distributions »
  Fabian H Sinz · Eero Simoncelli · Matthias Bethge
- 2009 Poster: Neurometric function analysis of population codes »
  Philipp Berens · Sebastian Gerwinn · Alexander S Ecker · Matthias Bethge
- 2009 Poster: A joint maximum-entropy model for binary neural population patterns and continuous signals »
  Sebastian Gerwinn · Philipp Berens · Matthias Bethge
- 2009 Spotlight: A joint maximum-entropy model for binary neural population patterns and continuous signals »
  Sebastian Gerwinn · Philipp Berens · Matthias Bethge
- 2009 Poster: Bayesian estimation of orientation preference maps »
  Jakob H Macke · Sebastian Gerwinn · Leonard White · Matthias Kaschube · Matthias Bethge
- 2008 Poster: The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction »
  Fabian H Sinz · Matthias Bethge
- 2008 Spotlight: The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction »
  Fabian H Sinz · Matthias Bethge
- 2007 Oral: Bayesian Inference for Spiking Neuron Models with a Sparsity Prior »
  Sebastian Gerwinn · Jakob H Macke · Matthias Seeger · Matthias Bethge
- 2007 Spotlight: Near-Maximum Entropy Models for Binary Neural Representations of Natural Images »
  Matthias Bethge · Philipp Berens
- 2007 Poster: Near-Maximum Entropy Models for Binary Neural Representations of Natural Images »
  Matthias Bethge · Philipp Berens
- 2007 Poster: Bayesian Inference for Spiking Neuron Models with a Sparsity Prior »
  Sebastian Gerwinn · Jakob H Macke · Matthias Seeger · Matthias Bethge
- 2007 Poster: Receptive Fields without Spike-Triggering »
  Jakob H Macke · Günther Zeck · Matthias Bethge