

Poster

Linear dynamical neural population models through nonlinear embeddings

Yuanjun Gao · Evan Archer · Liam Paninski · John Cunningham

Area 5+6+7+8 #109

Keywords: [ Time Series Analysis ] [ (Cognitive/Neuroscience) Neural Coding ] [ (Other) Neuroscience ] [ Nonlinear Dimension Reduction and Manifold Learning ] [ Variational Inference ] [ (Other) Unsupervised Learning Methods ]


Abstract:

A body of recent work in modeling neural activity focuses on recovering low-dimensional latent features that capture the statistical structure of large-scale neural populations. Most such approaches have focused on linear generative models, where inference is computationally tractable. Here, we propose fLDS, a general class of nonlinear generative models that permits the firing rate of each neuron to vary as an arbitrary smooth function of a latent, linear dynamical state. This extra flexibility allows the model to capture a richer set of neural variability than a purely linear model while retaining an easily visualizable low-dimensional latent space. To fit this class of non-conjugate models we propose a variational inference scheme, along with a novel approximate posterior capable of capturing rich temporal correlations. We show that our techniques permit inference in a wide class of generative models. We also show, in application to two neural datasets, that compared to state-of-the-art neural population models, fLDS captures a much larger proportion of neural variability with a small number of latent dimensions, providing superior predictive performance and interpretability.
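
To make the generative structure described above concrete, the following is a minimal sketch in our own notation, assuming Gaussian linear latent dynamics and Poisson spike-count observations; this particular parameterization is one instance of the general fLDS class, not necessarily the only one the paper considers:

```latex
% Sketch of an fLDS-style generative model (notation ours):
% a linear Gaussian latent state with a nonlinear per-neuron rate map.
\begin{align*}
  z_1 &\sim \mathcal{N}(\mu_1, Q_1)
      && \text{initial latent state} \\
  z_{t+1} \mid z_t &\sim \mathcal{N}(A z_t, Q)
      && \text{linear latent dynamics} \\
  x_{t,i} \mid z_t &\sim \mathrm{Poisson}\bigl(f_i(z_t)\bigr)
      && \text{spike count of neuron } i \text{ at time } t,
\end{align*}
```

where each $f_i$ is an arbitrary smooth function (e.g., a neural network) mapping the low-dimensional latent state $z_t \in \mathbb{R}^d$ to neuron $i$'s firing rate. The key point is that the dynamics remain linear, so the latent trajectories stay easy to visualize, while all the nonlinearity lives in the embedding $f$.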
