Abstract

Humans and other animals can improve their learning performance as they solve related tasks from a given problem domain, to the point of being able to learn from extremely limited data. While synaptic plasticity is generally thought to underlie learning in the brain, the precise neural and synaptic mechanisms by which learning processes improve through experience are not well understood. Here, we present a general-purpose, biologically plausible meta-learning rule which estimates gradients with respect to the parameters of an underlying learning algorithm by simply running it twice. Our rule may be understood as a generalization of contrastive Hebbian learning to meta-learning and, notably, it requires neither computing second derivatives nor going backwards in time, two characteristic features of previous gradient-based methods that are hard to conceive in physical neural circuits. We demonstrate the generality of our rule by applying it to two distinct models: a complex synapse with internal states which consolidate task-shared information, and a dual-system architecture in which a primary network is rapidly modulated by another one to learn the specifics of each task. For both models, our meta-learning rule matches or outperforms reference algorithms on a wide range of benchmark problems, while using only information presumed to be locally available at neurons and synapses. We corroborate these findings with a theoretical analysis of the gradient estimation error incurred by our rule.
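To make the "run it twice" recipe concrete, the sketch below shows one way a contrastive, two-phase meta-gradient estimate could be set up. It is an illustration under assumptions, not the paper's implementation: the toy ridge-regression inner task, the nudging strength `beta`, and the names `inner_loss`, `outer_loss`, `adapt`, and `contrastive_meta_grad` are invented for this example, and plain gradient descent via `jax.grad` stands in for the inner learning dynamics. The idea sketched here is to compare a free phase (inner learning on its own loss) with a weakly nudged phase (inner learning on the inner loss plus `beta` times the meta objective) and read an estimate of the meta-gradient off the difference.

```python
# Hypothetical two-phase ("contrastive") meta-gradient sketch; all names and the
# toy ridge-regression task are assumptions made for illustration.
import jax
import jax.numpy as jnp


def inner_loss(w, theta, x, y):
    # Inner (task) loss: ridge regression whose regularizer strength is set by
    # the meta-parameter theta (a stand-in for slowly meta-learned quantities).
    return jnp.mean((x @ w - y) ** 2) + jnp.exp(theta) * jnp.sum(w ** 2)


def outer_loss(w, x_val, y_val):
    # Meta objective: validation error of the adapted weights.
    return jnp.mean((x_val @ w - y_val) ** 2)


def adapt(theta, x, y, x_val, y_val, beta, steps=500, lr=0.05):
    # Run the inner learner on the augmented loss  L_in + beta * L_out.
    # beta = 0 gives the free phase, a small beta > 0 the nudged phase.
    def augmented(w):
        return inner_loss(w, theta, x, y) + beta * outer_loss(w, x_val, y_val)

    grad_fn = jax.grad(augmented)
    w = jnp.zeros(x.shape[1])
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w


def contrastive_meta_grad(theta, x, y, x_val, y_val, beta=1e-2):
    # Run the inner learner twice and compare how sensitive the inner loss is
    # to theta at the two resulting equilibria; dividing by beta rescales the
    # difference into a gradient estimate.
    w_free = adapt(theta, x, y, x_val, y_val, beta=0.0)
    w_nudged = adapt(theta, x, y, x_val, y_val, beta=beta)
    dtheta = jax.grad(inner_loss, argnums=1)
    return (dtheta(w_nudged, theta, x, y) - dtheta(w_free, theta, x, y)) / beta


# Example usage on random data (shapes and values are arbitrary):
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
x, x_val = jax.random.normal(k1, (32, 5)), jax.random.normal(k2, (32, 5))
w_true = jax.random.normal(k3, (5,))
y, y_val = x @ w_true, x_val @ w_true
meta_grad = contrastive_meta_grad(0.0, x, y, x_val, y_val)
```

Note that, in this sketch, the estimate uses only first derivatives of the inner loss evaluated at the two equilibria; no second derivatives and no backward-in-time unrolling appear, which is the property the abstract highlights.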
Author Information
Nicolas Zucchet (ETH Zürich)
Simon Schug (ETH Zürich)
Johannes von Oswald (ETH Zürich)
Dominic Zhao (ETH Zürich)
João Sacramento (ETH Zürich)
More from the Same Authors
- 2020: Meta-Learning via Hypernetworks
  Dominic Zhao
- 2021 Spotlight: Credit Assignment in Neural Networks through Deep Feedback Control
  Alexander Meulemans · Matilde Tristany Farinha · Javier Garcia Ordonez · Pau Vilimelis Aceituno · João Sacramento · Benjamin F. Grewe
- 2022: Random initialisations performing above chance and how to find them
  Frederik Benzing · Simon Schug · Robert Meier · Johannes von Oswald · Yassir Akram · Nicolas Zucchet · Laurence Aitchison · Angelika Steger
- 2023 Poster: Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
  Alexander Meulemans · Simon Schug · Seijin Kobayashi · Nathaniel Daw · Gregory Wayne
- 2023 Poster: Online learning of long-range dependencies
  Nicolas Zucchet · Robert Meier · Simon Schug · Asier Mujika · João Sacramento
- 2022: Panel
  Tyler Hayes · Tinne Tuytelaars · Subutai Ahmad · João Sacramento · Zsolt Kira · Hava Siegelmann · Christopher Summerfield
- 2022: Poster Session 1
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · Xinwei Zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2022 Poster: The least-control principle for local learning at equilibrium
  Alexander Meulemans · Nicolas Zucchet · Seijin Kobayashi · Johannes von Oswald · João Sacramento
- 2022 Poster: Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
  Seijin Kobayashi · Pau Vilimelis Aceituno · Johannes von Oswald
- 2021 Poster: Credit Assignment in Neural Networks through Deep Feedback Control
  Alexander Meulemans · Matilde Tristany Farinha · Javier Garcia Ordonez · Pau Vilimelis Aceituno · João Sacramento · Benjamin F. Grewe
- 2021 Poster: Posterior Meta-Replay for Continual Learning
  Christian Henning · Maria Cervera · Francesco D'Angelo · Johannes von Oswald · Regina Traber · Benjamin Ehret · Seijin Kobayashi · Benjamin F. Grewe · João Sacramento
- 2021 Poster: Learning where to learn: Gradient sparsity in meta and continual learning
  Johannes von Oswald · Dominic Zhao · Seijin Kobayashi · Simon Schug · Massimo Caccia · Nicolas Zucchet · João Sacramento
- 2020 Poster: A Theoretical Framework for Target Propagation
  Alexander Meulemans · Francesco Carzaniga · Johan Suykens · João Sacramento · Benjamin F. Grewe
- 2020 Spotlight: A Theoretical Framework for Target Propagation
  Alexander Meulemans · Francesco Carzaniga · Johan Suykens · João Sacramento · Benjamin F. Grewe