Recent years have witnessed an explosion of progress in AI. With it, a growing community of experts and practitioners is pushing the boundaries of the field without regard to the brain. This is in stark contrast with the field's transdisciplinary origins, when interest in designing intelligent algorithms was shared by neuroscientists, psychologists and computer scientists alike. Similar progress has been made in neuroscience, where novel experimental techniques now afford unprecedented access to brain activity and function. However, the traditional neuroscience research program lacks the frameworks needed to exploit these techniques in advancing an end-to-end understanding of biological intelligence. For the first time, mechanistic discoveries emerging from deep learning, reinforcement learning and other AI fields may be able to steer fundamental neuroscience research in ways that go beyond the standard uses of machine learning for modelling and data analysis. For example, successful training algorithms in artificial networks, developed without biological constraints, can motivate research questions and hypotheses about the brain. Conversely, a deeper understanding of brain computations at the level of large neural populations may help shape future directions in AI. This workshop aims to address this novel situation by building on existing AI-neuroscience relationships and, crucially, by outlining new directions for artificial systems and next-generation neuroscience experiments. We invite contributions on the modern intersection of neuroscience and AI, in particular those addressing questions that can only now be tackled thanks to recent progress in AI: the role of recurrent dynamics, inductive biases to guide learning, global versus local learning rules, and the interpretability of network activity. This workshop will promote discussion and showcase diverse perspectives on these open questions.
Sat 8:15 a.m. - 8:30 a.m. | Opening Remarks (announcements) | Guillaume Lajoie · Jessica Thompson · Maximilian Puelma Touzel · Eli Shlizerman · Konrad Kording
Sat 8:30 a.m. - 9:00 a.m. | Invited Talk: Hierarchical Reinforcement Learning: Computational Advances and Neuroscience Connections (talk) | Doina Precup
Sat 9:00 a.m. - 9:30 a.m. | Invited Talk: Deep learning without weight transport (talk) | Timothy Lillicrap
Recent advances in machine learning have been made possible by the backpropagation-of-error algorithm. Backprop delivers detailed error feedback across multiple layers of representation to adjust synaptic weights, allowing us to train even very large networks effectively. Whether the brain employs similar deep learning algorithms remains contentious, and how it might do so remains a mystery. In particular, backprop uses the weights of the network's forward pass to precisely compute error feedback in the backward pass. This way of computing errors across multiple layers is fundamentally at odds with what we know about the brain's local computations. We will describe new proposals for biologically motivated learning algorithms that are as effective as backpropagation without requiring weight transport.
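The weight-transport problem described in the abstract above has a well-known family of proposed workarounds. As a minimal illustrative sketch (not necessarily the specific algorithms presented in this talk), the code below implements feedback alignment (Lillicrap et al., 2016), in which a fixed random matrix B replaces the transposed forward weights in the backward pass, so no weight transport is needed. The network sizes, the random linear teacher T, and the learning rate are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network: x -> h = tanh(W1 @ x) -> y = W2 @ h,
# trained to match a fixed random linear teacher T.
n_in, n_hid, n_out = 20, 64, 5
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B = rng.normal(0.0, 0.1, (n_hid, n_out))   # fixed random feedback weights
T = rng.normal(0.0, 1.0, (n_out, n_in))    # teacher defining the task

lr = 0.01
for step in range(5000):
    x = rng.normal(size=n_in)
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - T @ x                          # output error
    # Backward pass: B is used in place of W2.T, avoiding weight transport.
    dh = (B @ e) * (1.0 - h ** 2)          # tanh derivative is 1 - tanh^2
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)
```

Despite the feedback weights being fixed and random, the forward weights tend to align with B over training, which is why this rule can approach backprop's effectiveness on simple tasks.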
Sat 9:30 a.m. - 9:45 a.m. | Contributed talk: Eligibility traces provide a data-inspired alternative to backpropagation through time (talk) | Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Sat 9:45 a.m. - 10:30 a.m. | Coffee Break + Posters
Sat 10:30 a.m. - 11:00 a.m. | Invited Talk: Computing and learning in the presence of neural noise (talk) | Cristina Savin
One key distinction between artificial and biological neural networks is the presence of noise, both intrinsic, e.g. due to synaptic failures, and extrinsic, arising through complex recurrent dynamics. Traditionally, this noise has been viewed as a 'bug' and the main computational challenge the brain needs to face. More recently, it has been argued that circuit stochasticity may be a 'feature', in that it can be recruited for useful computations, such as representing uncertainty about the state of the world. Here we lay out a new argument for the role of stochasticity during learning. In particular, we use a mathematically tractable stochastic neural network model that allows us to derive local plasticity rules for optimizing a given global objective. This rule leads to representations that reflect both task structure and stimulus priors in interesting ways. Moreover, in this framework stochasticity is both a feature, as learning cannot happen in the absence of noise, and a bug, as the noise corrupts neural representations. Importantly, the network learns to use recurrent interactions to compensate for the negative effects of noise and to maintain robust circuit function.
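As a toy illustration of noise-dependent learning (an illustration only, not the specific model from this talk), the sketch below implements node perturbation, a classic local rule in which each unit's intrinsic noise, correlated with the change in a global loss, provides an unbiased gradient estimate. With the noise amplitude sigma set to zero, the learning signal vanishes entirely, echoing the abstract's point that learning cannot happen without noise. All names and parameters here are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 10, 3
W = rng.normal(0.0, 0.1, (n_out, n_in))   # learned weights
T = rng.normal(0.0, 1.0, (n_out, n_in))   # fixed teacher defining the task
lr, sigma = 0.05, 0.1

def loss(y, y_star):
    return 0.5 * np.sum((y - y_star) ** 2)

for step in range(5000):
    x = rng.normal(size=n_in)
    y_star = T @ x
    y_clean = W @ x                        # noiseless response
    xi = sigma * rng.normal(size=n_out)    # intrinsic noise in each unit
    y_noisy = y_clean + xi
    # Local rule: each weight update uses only the unit's own noise,
    # its presynaptic input, and one global scalar (the change in loss).
    dL = loss(y_noisy, y_star) - loss(y_clean, y_star)
    W -= lr * (dL / sigma ** 2) * np.outer(xi, x)
```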
Sat 11:00 a.m. - 11:30 a.m. | Invited Talk: Universality and individuality in neural dynamics across large populations of recurrent networks (talk) | David Sussillo
Sat 11:30 a.m. - 11:45 a.m. | Contributed talk: How well do deep neural networks trained on object recognition characterize the mouse visual system? (talk) | Santiago A. Cadena, Fabian H. Sinz, Taliah Muhammad, Emmanouil Froudarakis, Erick Cobos, Edgar Y. Walker, Jake Reimer, Matthias Bethge,
Sat 11:45 a.m. - 12:00 p.m. | Contributed talk: Functional Annotation of Human Cognitive States using Graph Convolution Networks (talk) | Yu Zhang, Pierre Bellec
Sat 12:00 p.m. - 2:00 p.m. | Lunch Break
Sat 2:00 p.m. - 2:30 p.m. | Invited Talk: Simultaneous rigidity and flexibility through modularity in cognitive maps for navigation (talk) | Ila Fiete
Sat 2:30 p.m. - 3:00 p.m. | Invited Talk: Theories for the emergence of internal representations in neural networks: from perception to navigation (talk) | Surya Ganguli
Sat 3:00 p.m. - 3:15 p.m. | Contributed talk: Adversarial Training of Neural Encoding Models on Population Spike Trains (talk) | Poornima Ramesh, Mohamad Atayi, Jakob H Macke
Sat 3:15 p.m. - 3:30 p.m. | Contributed talk: Learning to Learn with Feedback and Local Plasticity (talk) | Jack Lindsey
Sat 3:30 p.m. - 4:15 p.m. | Coffee Break + Posters
Sat 4:15 p.m. - 4:45 p.m. | Poster Session (posters) | Pravish Sainath · Mohamed Akrout · Charles Delahunt · Nathan Kutz · Guangyu Robert Yang · Joseph Marino · L F Abbott · Nicolas Vecoven · Damien Ernst · andrew warrington · Michael Kagan · Kyunghyun Cho · Kameron Harris · Leopold Grinberg · John J. Hopfield · Dmitry Krotov · Taliah Muhammad · Erick Cobos · Edgar Walker · Jacob Reimer · Andreas Tolias · Alexander Ecker · Janaki Sheth · Yu Zhang · Maciej Wołczyk · Jacek Tabor · Szymon Maszke · Roman Pogodin · Dane Corneil · Wulfram Gerstner · Baihan Lin · Guillermo Cecchi · Jenna M Reinen · Irina Rish · Guillaume Bellec · Darjan Salaj · Anand Subramoney · Wolfgang Maass · Yueqi Wang · Ari Pakman · Jin Hyung Lee · Liam Paninski · Bryan Tripp · Colin Graber · Alex Schwing · Luke Prince · Gabriel Ocker · Michael Buice · Benjamin Lansdell · Konrad Kording · Jack Lindsey · Terrence Sejnowski · Matthew Farrell · Eric Shea-Brown · Nicolas Farrugia · Victor Nepveu · Jiwoong Im · Kristin Branson · Brian Hu · Ramakrishnan Iyer · Stefan Mihalas · Sneha Aenugu · Hananel Hazan · Sihui Dai · Tan Nguyen · Doris Tsao · Richard Baraniuk · Anima Anandkumar · Hidenori Tanaka · Aran Nayebi · Stephen Baccus · Surya Ganguli · Dean Pospisil · Eilif Muller · Jeffrey S Cheng · Gaël Varoquaux · Kamalaker Dadi · Dimitrios C Gklezakos · Rajesh PN Rao · Anand Louis · Christos Papadimitriou · Santosh Vempala · Naganand Yadati · Daniel Zdeblick · Daniela M Witten · Nicholas Roberts · Vinay Prabhu · Pierre Bellec · Poornima Ramesh · Jakob H Macke · Santiago Cadena · Guillaume Bellec · Franz Scherr · Owen Marschall · Robert Kim · Hannes Rapp · Marcio Fonseca · Oliver Armitage · Jiwoong Im · Thomas Hardcastle · Abhishek Sharma · Wyeth Bair · Adrian Valente · Shane Shang · Merav Stern · Rutuja Patil · Peter Wang · Sruthi Gorantla · Peter Stratton · Tristan Edwards · Jialin Lu · Martin Ester · Yurii Vlasov · Siavash Golkar
Sat 4:45 p.m. - 5:15 p.m. | Invited Talk: Sensory prediction error signals in the neocortex (talk) | Blake Richards
Many models have postulated that the neocortex implements a hierarchical inference system, whereby each region sends predictions of the inputs it expects to lower-order regions, allowing the latter to learn from any prediction errors. Combining top-down predictions with bottom-up sensory information to generate errors that can be communicated across the hierarchy is critical to credit assignment in deep predictive learning algorithms. Indirect experimental evidence supporting a hierarchical prediction system in the neocortex comes from both human and animal work. However, direct evidence for top-down-guided prediction errors in the neocortex that could support deep credit assignment during unsupervised learning remains limited. Here, we address this issue with two-photon calcium imaging of layer 2/3 and layer 5 pyramidal neurons in the primary visual cortex of awake mice during passive exposure to visual stimuli in which unexpected events occur. To assess the evidence for top-down-guided prediction errors, we recorded from both the somatic compartments and the apical dendrites in layer 1, where a large number of top-down inputs are received. We find evidence for a diversity of prediction error signals depending on both stimulus type and cell type. These signals can be learned in some cases and, in turn, appear to drive some learning. These data will help us understand hierarchical inference in the neocortex and may guide new unsupervised techniques for machine learning.
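For readers unfamiliar with the hierarchical prediction-error framework referenced in the abstract above, the sketch below is a minimal predictive-coding loop in the spirit of Rao and Ballard (1999), not the experimental model from the talk: a higher area predicts lower-area activity, and the residual prediction error drives both inference (settling the higher-area representation) and learning (adjusting the top-down weights). Layer sizes and learning rates are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n_low, n_high = 16, 4
U = rng.normal(0.0, 0.1, (n_low, n_high))  # top-down prediction weights
lr_r, lr_U = 0.1, 0.01

for step in range(1000):
    x = rng.normal(size=n_low)             # bottom-up "sensory" input
    r = np.zeros(n_high)                   # higher-area representation
    for _ in range(20):                    # inference: settle r on this input
        err = x - U @ r                    # prediction error in the lower area
        r += lr_r * (U.T @ err)
    # Learning: adjust top-down predictions to reduce future errors.
    U += lr_U * np.outer(err, r)
```

Both updates descend the same objective, 0.5 * ||x - U r||^2, so the error signal computed locally in the lower area carries all the information needed for credit assignment to the higher area.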
Sat 5:15 p.m. - 6:00 p.m. | Panel Session: A new hope for neuroscience (panel) | Yoshua Bengio · Blake Richards · Timothy Lillicrap · Ila Fiete · David Sussillo · Doina Precup · Konrad Kording · Surya Ganguli
Author Information
Guillaume Lajoie (Université de Montréal / Mila)
Eli Shlizerman (Departments of Applied Mathematics and Electrical & Computer Engineering, University of Washington Seattle)
Maximilian Puelma Touzel (Mila)
Jessica Thompson (Université de Montréal)
Konrad Kording (University of Pennsylvania)