### Workshop

## Statistical Methods for Understanding Neural Systems

### Alyson Fletcher · Jakob H. Macke · Ryan Adams · Jascha Sohl-Dickstein

##### Room 511 f

Fri 11 Dec, 5:30 a.m. PST

8:15 Opening remarks and welcome

8:30 Surya Ganguli: Towards a theory of high dimensional, single trial neural data analysis: On the role of random projections and phase transitions

9:00 Katherine Heller: Translating between human & animal studies via Bayesian multi-task learning

9:30 Mitya Chklovskii: Similarity matching: A new theory of neural computation

10:00 Coffee break 1

10:30 Poster Session 1

11:00 Matthias Bethge: Let's compete—benchmarking models in neuroscience

11:30 Yoshua Bengio: Small Steps Towards Biologically Plausible Deep Learning

12:00 Lunch

2:30 Pulkit Agrawal: The Human Visual Hierarchy is Isomorphic to the Hierarchy Learned by a Deep Convolutional Neural Network Trained for Object Recognition

3:00 Yann LeCun: Unsupervised Learning

3:30 Poster Session 2

4:00 Coffee break 2

4:30 Neil Lawrence: The Mechanistic Fallacy and Modelling how we Think

5:00 Panel: Deep learning and neuroscience: What can brains tell us about massive computing and vice versa?

Panelists: Yoshua Bengio, Matthias Bethge, Surya Ganguli, Konrad Kording, Yann LeCun, Neil Lawrence

6:00 Wrap up

### Posters

Pulkit Agrawal, Mark D. Lescroart, Dustin E. Stansbury, Jitendra Malik, & Jack L. Gallant: The Human Visual Hierarchy is Isomorphic to the Hierarchy Learned by a Deep Convolutional Neural Network Trained for Object Recognition

Christian Donner and Hideaki Shimazaki: Approximation methods for inferring time-varying interactions of a large neural population

Alexey Dosovitskiy and Thomas Brox: Inverting Convolutional Networks with Convolutional Networks

Johannes Friedrich, Daniel Soudry, Yu Mu, Jeremy Freeman, Misha Ahrens, and Liam Paninski: Fast Constrained Non-negative Matrix Factorization for Whole-Brain Calcium Imaging Data

Amin Karbasi, Amir Hesam Salavati, and Martin Vetterli: Learning Network Structures from Firing Patterns

Jesse A. Livezey, Gopala K. Anumanchipalli, Brian Cheung, Prabhat, Friedrich T. Sommer, Michael R. DeWeese, Kristofer E. Bouchard, and Edward F. Chang: Classifying spoken syllables from human sensorimotor cortex with deep networks

Gonzalo Mena, Lauren Grosberg, Frederick Kellison-Linn, E.J. Chichilnisky, and Liam Paninski: Large-scale Multi Electrode Array Spike Sorting Algorithm Introducing Concurrent Recording and Stimulation

Jonathan Platkiewicz and Asohan Amarasingham: Monosynaptic Connection Test for Pairwise Extracellular Spike Data

Akshay Rangamani, Jacob Harer, Amit Sinha, Alik Widge, Emad Eskandar, Darin Dougherty, Ishita Basu, Sydney Cash, Angelique Paulk, Trac D. Tran, and Sang (Peter) Chin: Modeling Local Field Potentials with Recurrent Neural Networks

Maja Rudolph and David Blei: The Dirichlet-Gamma Filter for Discovery of Neural Ensembles and their Temporal Dynamics

### Abstract

Recent advances in neural recording technologies, including calcium imaging and high-density electrode arrays, have made it possible to simultaneously record neural activity from large populations of neurons for extended periods of time. These developments promise unprecedented insights into the collective dynamics of neural populations and thereby the underpinnings of brain-like computation. However, this new large-scale regime for neural data brings significant methodological challenges. This workshop seeks to explore the statistical methods and theoretical tools that will be necessary to study these data, build new models of neural dynamics, and increase our understanding of the underlying computation. We have invited researchers across a range of disciplines in statistics, applied physics, machine learning, and both theoretical and experimental neuroscience, with the goal of fostering interdisciplinary insights. We hope that active discussions among these groups can set in motion new collaborations and facilitate future breakthroughs on fundamental research problems.

The workshop will focus on three central questions:

a) How can we deal with incomplete data in a principled manner? In most experimental settings, even advanced neural recording methods can only sample a small fraction of all neurons that might be involved in a task, and the observations are often indirect and noisy. As a result, many recordings are from neurons that receive inputs from neurons that are not themselves directly observed, at least not over the same time period. How can we deal with this 'incomplete data' problem in a principled manner? How does this sparsity of recordings influence what we can and cannot infer about neural dynamics and mechanisms?

b) How can we incorporate existing models of neural dynamics into neural data analysis? Theoretical neuroscientists have studied neural population dynamics intensely for decades, resulting in a plethora of candidate models. However, most analysis methods for neural data do not directly incorporate any such models, but rather build on generic methods for dimensionality reduction or time-series modelling. How can we incorporate existing models of neural dynamics? Conversely, how can we design neural data analysis methods such that they explicitly constrain models of neural dynamics?
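To make the contrast above concrete, here is a minimal sketch of the kind of generic dimensionality reduction the paragraph refers to — PCA applied to simulated population activity driven by a low-dimensional latent trajectory. The simulated data and all parameter choices are illustrative assumptions, not methods or results from the workshop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 time bins, 50 recorded neurons, and a
# 2-dimensional latent trajectory (a slow rotation) driving them all.
T, n_latent, n_neurons = 500, 2, 50
t = np.arange(T)
latents = np.column_stack([np.sin(2 * np.pi * t / 100),
                           np.cos(2 * np.pi * t / 100)])      # (T, 2)
loading = rng.normal(size=(n_latent, n_neurons))              # latent -> neuron weights
activity = latents @ loading + 0.1 * rng.normal(size=(T, n_neurons))

# Generic, model-free analysis: PCA via SVD of the centered data.
# Most of the variance should concentrate in two components,
# matching the true latent dimensionality.
X = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
print(np.round(var_explained[:3], 3))
```

A model-based alternative, of the sort the workshop asks about, would replace the SVD step with a fitted dynamical model (e.g. a latent linear dynamical system), so that the recovered subspace is constrained by an explicit hypothesis about the population dynamics.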

c) What synergies are there between analyzing biological and artificial neural systems? The rise of 'deep learning' methods has shown that hard computational problems can be solved by machine learning algorithms that are built by cascading many nonlinear units. Although artificial neural systems are fully observable, it has proven challenging to provide a theoretical understanding of how they solve computational problems and which features of a neural network are critical for its performance. While such 'deep networks' differ from biological neural networks in many ways, they provide an interesting testing ground for evaluating strategies for understanding neural processing systems. Are there synergies between methods for analyzing biological and artificial neural systems? Has the resurgence of deep learning resulted in new hypotheses or strategies for trying to understand biological neural networks?
