An increasing number of experimental studies indicate that perception encodes a posterior probability distribution over possible causes of sensory stimuli, which is used to act close to optimally in the environment. One outstanding difficulty with this hypothesis is that the exact posterior will in general be too complex to be represented directly, and thus neurons will have to represent an approximation of this distribution. Two influential proposals for how neural populations could represent the posterior efficiently are that 1) neural activity represents samples of the underlying distribution, or 2) it represents the parameters of a variational approximation of the posterior. We show that these approaches can be combined into an inference scheme that retains the advantages of both: it is able to represent multiple modes and arbitrary correlations, a feature of sampling methods, and it reduces the represented space to regions of high probability mass, a strength of variational approximations. Neurally, the combined method can be interpreted as a feed-forward preselection of the relevant state space, followed by a neural dynamics implementation of Markov Chain Monte Carlo (MCMC) to approximate the posterior over the relevant states. We demonstrate the effectiveness and efficiency of this approach on a sparse coding model. In numerical experiments on artificial data and image patches, we compare exact EM, variational state space selection alone, MCMC alone, and the combined select-and-sample approach. The select-and-sample approach integrates the advantages of sampling and variational approximations, and forms a robust, neurally plausible, and very efficient model of processing and learning in cortical networks. For sparse coding, we show applications easily exceeding a thousand observed and a thousand hidden dimensions.
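To make the two-stage idea concrete, here is a minimal sketch of "select, then sample" for a toy binary sparse coding model. It is an illustration under simplifying assumptions, not the paper's actual algorithm or code: the generative model is assumed to be y = W s + Gaussian noise with binary latents s_h, the feed-forward selection score |Wᵀy| is a stand-in for the paper's preselection, and the function name select_and_sample and all parameter values are hypothetical.

```python
# Sketch only: feed-forward preselection of latent units, then Gibbs sampling
# restricted to the selected subset (all other latents are clamped to zero).
# Assumed model: y = W s + Gaussian noise, s_h in {0, 1}, prior p(s_h = 1) = pi.
import numpy as np

rng = np.random.default_rng(0)

def select_and_sample(y, W, pi=0.1, sigma=0.1, n_select=10, n_sweeps=50):
    """Approximate p(s | y): (1) select the n_select hidden units with the
    largest feed-forward score, (2) Gibbs-sample only over that subset."""
    H = W.shape[1]
    # 1) Selection: a cheap feed-forward score per hidden unit (assumption).
    scores = np.abs(W.T @ y)
    selected = np.argsort(scores)[-n_select:]

    # 2) Sampling: Gibbs updates restricted to the selected latents.
    s = np.zeros(H)
    samples = []
    for _ in range(n_sweeps):
        for h in selected:
            # Residual of y with unit h's contribution removed.
            r = y - W @ s + W[:, h] * s[h]
            # Log-odds of s_h = 1 vs. s_h = 0 under the linear Gaussian model.
            log_odds = (np.log(pi / (1 - pi))
                        + (W[:, h] @ r - 0.5 * W[:, h] @ W[:, h]) / sigma**2)
            p_on = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30.0, 30.0)))
            s[h] = float(rng.random() < p_on)
        samples.append(s.copy())
    return np.array(samples), selected

# Toy usage: 25 observed dimensions, 100 hidden units, 3 active causes.
D, H = 25, 100
W = rng.normal(size=(D, H))
s_true = np.zeros(H)
s_true[rng.choice(H, size=3, replace=False)] = 1.0
y = W @ s_true + 0.1 * rng.normal(size=D)
samples, selected = select_and_sample(y, W)
print("selected latents:", np.sort(selected))
print("mean activation of selected latents:", samples[:, np.sort(selected)].mean(axis=0))
```

Because the Gibbs chain only ever visits the preselected subset, the cost per sweep scales with n_select rather than with the full number of hidden units, which is the efficiency argument the abstract makes for combining selection with sampling.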
Author Information
Jacquelyn A Shelton (TU Berlin)
Jörg Bornschein (University of Montreal)
Abdul Saboor Sheikh (Technical University of Berlin)
Pietro Berkes (Brandeis University)
Jörg Lücke (TU Berlin)
More from the Same Authors
- 2013 Poster: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach (Zhenwen Dai · Georgios Exarchakis · Jörg Lücke)
- 2012 Poster: Why MCA? Nonlinear Spike-and-slab Sparse Coding for Neurally Plausible Image Encoding (Jacquelyn A Shelton · Philip Sterne · Jörg Bornschein · Abdul Saboor Sheikh · Jörg Lücke)
- 2010 Poster: The Maximal Causes of Natural Scenes are Edge Filters (Jose G Puertas · Jörg Bornschein · Jörg Lücke)
- 2009 Poster: Occlusive Components Analysis (Jörg Lücke · Richard Turner · Maneesh Sahani · Marc Henniges)
- 2009 Poster: No evidence for active sparsification in the visual cortex (Pietro Berkes · Ben White · Jozsef Fiser)
- 2009 Poster: Augmenting Feature-driven fMRI Analyses: Semi-supervised learning and resting state activity (Matthew B Blaschko · Jacquelyn A Shelton · Andreas Bartels)
- 2008 Poster: Characterizing neural dependencies with Poisson copula models (Pietro Berkes · Frank Wood · Jonathan W Pillow)
- 2008 Spotlight: Characterizing neural dependencies with Poisson copula models (Pietro Berkes · Frank Wood · Jonathan W Pillow)
- 2007 Workshop: Beyond Simple Cells: Probabilistic Models for Visual Cortical Processing (Richard Turner · Pietro Berkes · Maneesh Sahani)
- 2007 Poster: On Sparsity and Overcompleteness in Image Models (Pietro Berkes · Richard Turner · Maneesh Sahani)