

Poster

Bayesian latent structure discovery from multi-neuron recordings

Scott Linderman · Ryan Adams · Jonathan Pillow

Area 5+6+7+8 #135

Keywords: [ (Other) Probabilistic Models and Methods ] [ (Other) Neuroscience ] [ (Cognitive/Neuroscience) Neural Coding ]


Abstract:

Neural circuits contain heterogeneous groups of neurons that differ in type, location, connectivity, and basic response properties. However, traditional methods for dimensionality reduction and clustering are ill-suited to recovering the structure underlying the organization of neural circuits. In particular, they do not take advantage of the rich temporal dependencies in multi-neuron recordings and fail to account for the noise in neural spike trains. Here we describe new tools for inferring latent structure from simultaneously recorded spike train data using a hierarchical extension of a multi-neuron point process model commonly known as the generalized linear model (GLM). Our approach combines the GLM with flexible graph-theoretic priors governing the relationship between latent features and neural connectivity patterns. Fully Bayesian inference via Pólya-gamma augmentation of the resulting model allows us to classify neurons and infer latent dimensions of circuit organization from correlated spike trains. We demonstrate the effectiveness of our method with applications to synthetic data and multi-neuron recordings in primate retina, revealing latent patterns of neural types and locations from spike trains alone.
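The central computational idea mentioned in the abstract, Pólya-gamma augmentation making the GLM likelihood conditionally Gaussian so that weights can be Gibbs-sampled, can be illustrated compactly. The sketch below is not the authors' implementation: it uses a Bernoulli (binarized spike) approximation in place of the full point-process model, a crude truncated sum-of-gammas draw in place of a dedicated Pólya-gamma sampler, and purely illustrative names and sizes (T, N, S, n_iter). It shows the two alternating steps: augment each observation with ω ~ PG(1, ψ), then draw each neuron's incoming weights from the resulting Gaussian conditional.

```python
# Minimal sketch of Polya-gamma augmented Gibbs sampling for a Bernoulli
# approximation to a network GLM. Assumed/illustrative throughout; not the
# authors' code or model.
import numpy as np

rng = np.random.default_rng(0)

def sample_pg(c, b=1.0, K=200):
    """Approximate PG(b, c) draws (one per element of c) using a truncated
    sum-of-gammas representation; a rough stand-in for a proper PG sampler."""
    c = np.asarray(c, dtype=float)
    k = np.arange(1, K + 1).reshape((K,) + (1,) * c.ndim)
    g = rng.gamma(b, 1.0, size=(K,) + c.shape)
    return (g / ((k - 0.5) ** 2 + c ** 2 / (4 * np.pi ** 2))).sum(axis=0) / (2 * np.pi ** 2)

# Synthetic binarized spike trains: S[t, n] in {0, 1}.
T, N = 500, 8
S = (rng.random((T, N)) < 0.2).astype(float)

# Covariates: a bias term plus a one-bin spike history from every neuron,
# so each column of W holds one neuron's bias and incoming "connection" weights.
X = np.hstack([np.ones((T - 1, 1)), S[:-1]])   # (T-1, D) with D = N + 1
Y = S[1:]                                      # (T-1, N) responses
kappa = Y - 0.5                                # PG trick: kappa = y - b/2, b = 1

D = X.shape[1]
mu0, Lambda0 = np.zeros(D), np.eye(D)          # Gaussian prior on each weight column
W = np.zeros((D, N))

for it in range(50):                           # Gibbs sweeps
    Psi = X @ W                                # linear predictors psi[t, n]
    Omega = sample_pg(Psi)                     # 1) augment: omega ~ PG(1, psi)
    for n in range(N):                         # 2) weights | omega are Gaussian
        prec = X.T @ (Omega[:, n:n + 1] * X) + Lambda0
        mean = np.linalg.solve(prec, X.T @ kappa[:, n] + Lambda0 @ mu0)
        W[:, n] = rng.multivariate_normal(mean, np.linalg.inv(prec))

# In the full model, the prior over the connection weights would itself depend on
# latent neuron types and locations (e.g., block- or distance-structured network
# priors), and those latent variables would receive their own Gibbs updates.
```

The design point the abstract highlights is visible in step 2: once ω is drawn, the logistic likelihood behaves like a Gaussian regression, so structured priors over the weights (and the latent features that parameterize them) can be updated with standard conjugate machinery.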
