Latent variable models are ubiquitous in the exploratory analysis of neural population recordings, where they allow researchers to summarize the activity of large populations of neurons in lower-dimensional ‘latent’ spaces. Existing methods can generally be categorized into (i) Bayesian methods that facilitate flexible incorporation of prior knowledge and uncertainty estimation, but which typically do not scale to large datasets; and (ii) highly parameterized methods without explicit priors that scale better but often struggle in the low-data regime. Here, we bridge this gap by developing a fully Bayesian yet scalable version of Gaussian process factor analysis (bGPFA), which models neural data as arising from a set of inferred latent processes with a prior that encourages smoothness over time. Additionally, bGPFA uses automatic relevance determination to infer the dimensionality of neural activity directly from the training data during optimization. To enable the analysis of continuous recordings without trial structure, we introduce a novel variational inference strategy that scales near-linearly in time and also allows for non-Gaussian noise models appropriate for electrophysiological recordings. We apply bGPFA to continuous recordings spanning 30 minutes with over 14 million data points from primate motor and somatosensory cortices during a self-paced reaching task. We show that neural activity progresses from an initial state at target onset to a reach-specific preparatory state well before movement onset. The distance between these initial and preparatory latent states is predictive of reaction times across reaches, suggesting that such preparatory dynamics have behavioral relevance despite the lack of externally imposed delay periods. Additionally, bGPFA discovers latent processes that evolve over slow timescales on the order of several seconds and contain complementary information about reaction time. These timescales are longer than those revealed by methods that focus on individual movement epochs and may reflect fluctuations in, e.g., task engagement.
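For a concrete picture of the generative model sketched in the abstract, the following minimal NumPy example samples smooth Gaussian-process latents, applies ARD-style per-dimension scales to the loading matrix, and emits Poisson spike counts. All sizes, the RBF kernel, lengthscales, and variable names are illustrative assumptions rather than the paper's actual implementation, which additionally performs the scalable variational inference described above (not shown here).

```python
import numpy as np

def rbf_kernel(ts, lengthscale):
    """Squared-exponential kernel; encodes the smoothness-over-time prior."""
    d = ts[:, None] - ts[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Illustrative sizes only (not the paper's): T time bins, N neurons, D latents.
T, N, D = 500, 50, 10
ts = np.arange(T) * 0.025          # hypothetical 25 ms time bins
rng = np.random.default_rng(0)

# Draw D independent GP latents over time.
K = rbf_kernel(ts, lengthscale=0.2) + 1e-6 * np.eye(T)  # jitter for stability
L = np.linalg.cholesky(K)
X = L @ rng.standard_normal((T, D))                     # (T, D) latent paths

# ARD: per-dimension prior scales s_d multiply the loading columns; during
# optimization, scales driven toward zero prune superfluous latent dimensions.
s = rng.exponential(1.0, size=D)
C = rng.standard_normal((N, D)) * s                     # (N, D) loadings

# Non-Gaussian (Poisson) noise model, as appropriate for spike counts.
rates = np.exp(X @ C.T - 2.0)                           # (T, N) firing rates
Y = rng.poisson(rates)                                  # simulated spike counts
```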
Author Information
Kristopher Jensen (University of Cambridge)
Ta-Chu Kao (Gatsby Unit, UCL)
Jasmine Stone (Cambridge University)
Guillaume Hennequin (University of Cambridge)
More from the Same Authors
- 2022: Panel Discussion II: Geometric and topological principles for representations in the brain
  Bruno Olshausen · Kristopher Jensen · Gabriel Kreiman · Manu Madhav · Christian A Shewmake
- 2022: Generative models of non-Euclidean neural population dynamics
  Kristopher Jensen
- 2021 Poster: Natural continual learning: success is a journey, not (just) a destination
  Ta-Chu Kao · Kristopher Jensen · Gido van de Ven · Alberto Bernacchia · Guillaume Hennequin
- 2020 Poster: Manifold GPLVMs for discovering non-Euclidean latent structure in neural data
  Kristopher Jensen · Ta-Chu Kao · Marco Tripodi · Guillaume Hennequin
- 2020 Poster: Non-reversible Gaussian processes for identifying latent dynamical structure in neural data
  Virginia Rutten · Alberto Bernacchia · Maneesh Sahani · Guillaume Hennequin
- 2020 Oral: Non-reversible Gaussian processes for identifying latent dynamical structure in neural data
  Virginia Rutten · Alberto Bernacchia · Maneesh Sahani · Guillaume Hennequin
- 2018 Poster: Exact natural gradient in deep linear networks and its application to the nonlinear case
  Alberto Bernacchia · Mate Lengyel · Guillaume Hennequin