The brain remains the only known example of a truly general-purpose intelligent system. The study of human and animal cognition has revealed key insights, such as the ideas of parallel distributed processing, biological vision, and learning from reward signals, that have heavily influenced the design of artificial learning systems. Many AI researchers continue to look to neuroscience as a source of inspiration and insight. A key difficulty is that neuroscience is a vast and heterogeneous area of study, encompassing a bewildering array of subfields. In this tutorial, we will seek to provide both a broad overview of neuroscience as a whole and a focused look at two areas -- computational cognitive neuroscience and the neuroscience of learning in circuits -- that we believe are particularly relevant for AI researchers today. We will conclude by highlighting several ongoing lines of work that seek to import insights from these areas of neuroscience into AI, and vice versa.
Jane Wang (DeepMind)
Jane Wang is a research scientist at DeepMind on the neuroscience team, working on meta-reinforcement learning and neuroscience-inspired artificial agents. Her background is in physics, complex systems, and computational and cognitive neuroscience.
Kevin Miller (DeepMind and University College London)
Kevin Miller is a research scientist on the Neuroscience Team at DeepMind and a postdoc at University College London. He is currently working on understanding structured reinforcement learning in mice and machines.
Adam Marblestone (DeepMind)
Adam Marblestone is a Schmidt Futures innovation fellow. He was previously a research scientist at DeepMind, and earlier earned a PhD in biophysics and worked at a brain-computer interface company.
More from the Same Authors
2021 : Alchemy: A benchmark and analysis toolkit for meta-reinforcement learning agents »
Jane Wang · Michael King · Nicolas Porcel · Zeb Kurth-Nelson · Tina Zhu · Charles Deck · Peter Choy · Mary Cassin · Malcolm Reynolds · Francis Song · Gavin Buttimore · David Reichert · Neil Rabinowitz · Loic Matthey · Demis Hassabis · Alexander Lerchner · Matt Botvinick
2021 : Continual with Sujeeth Bharadwaj, Gabriel Silva, Eric Traut, Jane Wang »
Sujeeth Bharadwaj · Jane Wang · Weiwei Yang
2023 Poster: Meta-in-context learning in large language models »
Julian Coda-Forno · Marcel Binz · Zeynep Akata · Matt Botvinick · Jane Wang · Eric Schulz
2023 Poster: Passive learning of active causal strategies in agents and language models »
Andrew Lampinen · Stephanie Chan · Ishita Dasgupta · Andrew Nam · Jane Wang
2022 Poster: Data Distributional Properties Drive Emergent In-Context Learning in Transformers »
Stephanie Chan · Adam Santoro · Andrew Lampinen · Jane Wang · Aaditya Singh · Pierre Richemond · James McClelland · Felix Hill
2022 Poster: Semantic Exploration from Language Abstractions and Pretrained Representations »
Allison Tam · Neil Rabinowitz · Andrew Lampinen · Nicholas Roy · Stephanie Chan · DJ Strouse · Jane Wang · Andrea Banino · Felix Hill
2021 : Live Q&A Session 2 with Susan Athey, Yoshua Bengio, Sujeeth Bharadwaj, Jane Wang, Joshua Vogelstein, Weiwei Yang »
Susan Athey · Yoshua Bengio · Sujeeth Bharadwaj · Jane Wang · Weiwei Yang · Joshua T Vogelstein
2020 : Introduction for invited speaker, Frank Hutter »
2020 Workshop: Meta-Learning »
Jane Wang · Joaquin Vanschoren · Erin Grant · Jonathan Richard Schwarz · Francesco Visin · Jeff Clune · Roberto Calandra
2019 : Panel Discussion led by Grace Lindsay »
Grace Lindsay · Blake Richards · Doina Precup · Jacqueline Gottlieb · Jeff Clune · Jane Wang · Richard Sutton · Angela Yu · Ida Momennejad
2019 : Invited Talk #1: From brains to agents and back »
2019 Workshop: Meta-Learning »
Roberto Calandra · Ignasi Clavera Gilaberte · Frank Hutter · Joaquin Vanschoren · Jane Wang