Modeling dynamic neural activity by combining naturalistic video stimuli and stimulus-independent latent factors
Abstract
Understanding how the brain processes dynamic natural stimuli remains a fundamental challenge in neuroscience. Current dynamic neural encoding models either take stimuli as input but ignore shared variability in neural responses, or they model this variability by deriving latent embeddings from neural responses or behavior while ignoring the visual input. To address this gap, we propose a probabilistic model that incorporates video inputs along with stimulus-independent latent factors to capture variability in neuronal responses, predicting a joint distribution for the entire population. After training and testing our model on mouse V1 neuronal responses, we find that it outperforms video-only models in terms of log-likelihood and achieves further improvements when conditioned on responses from other neurons. Furthermore, we find that the learned latent factors strongly correlate with mouse behavior, although the model was trained without behavior data.