Credit Assignment in Neural Networks through Deep Feedback Control
The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, most current attempts at biologically plausible learning methods are non-local in time, require highly specific connectivity motifs, or lack a clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and uses the resulting control signal for credit assignment. The learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical systems theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC, which we corroborate with detailed results on toy experiments and standard computer-vision benchmarks.
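To make the control loop concrete, below is a minimal NumPy sketch of the mechanism the abstract describes: a proportional-integral (PI) controller nudges a toy two-layer network toward an output target, and each weight is updated with a rule that is local in space and time. All names and constants here (W1, Q1, k_p, k_i, layer sizes) are illustrative assumptions, not taken from the paper or its code; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

# Hypothetical toy instantiation of a DFC-style control loop.
# A PI controller drives the output toward the target; each weight update
# uses only quantities local to the synapse: the pre-synaptic rate and the
# difference between the controlled and the purely feedforward post-synaptic rate.

rng = np.random.default_rng(0)
phi = np.tanh  # neuron nonlinearity

n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # feedforward weights
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
Q1 = rng.normal(0.0, 0.5, (n_hid, n_out))  # feedback weights carrying the control signal
Q2 = np.eye(n_out)

x = rng.normal(size=n_in)                  # single toy input
y_target = np.array([0.5, -0.5])           # desired output

dt, tau = 0.05, 1.0                        # integration step, neural time constant
k_p, k_i = 0.2, 0.5                        # PI controller gains (illustrative values)
T = 2000                                   # number of simulation steps

v1, v2 = np.zeros(n_hid), np.zeros(n_out)  # membrane potentials
u_int = np.zeros(n_out)                    # integral term of the controller
dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)

for _ in range(T):
    e = y_target - phi(v2)                 # output error seen by the controller
    u_int += dt * e
    u = k_p * e + k_i * u_int              # PI control signal

    # Controlled network dynamics: leaky integration of the feedforward
    # drive plus the feedback-injected control signal.
    v1 += (dt / tau) * (-v1 + W1 @ x + Q1 @ u)
    v2 += (dt / tau) * (-v2 + W2 @ phi(v1) + Q2 @ u)

    # Local plasticity: controlled rate minus the rate the feedforward
    # input alone would produce, times the pre-synaptic rate.
    dW1 += dt * np.outer(phi(v1) - phi(W1 @ x), x)
    dW2 += dt * np.outer(phi(v2) - phi(W2 @ phi(v1)), phi(v1))

eta = 0.1
W1 += eta * dW1 / (T * dt)                 # apply time-averaged updates
W2 += eta * dW2 / (T * dt)
```

Note that each update uses only pre- and post-synaptic quantities, which is what makes the rule local. The connection to Gauss-Newton optimization established in the paper depends on conditions on the feedback weights that this sketch does not enforce.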
Author Information
Alexander Meulemans (ETH Zürich | University of Zürich | Institute of Neuroinformatics)
Matilde Tristany Farinha (Swiss Federal Institute of Technology)
Javier Garcia Ordonez (Swiss Federal Institute of Technology)
Pau Vilimelis Aceituno (Institute of Neuroinformatics, University of Zurich and ETH Zurich)
João Sacramento (ETH Zurich)
Benjamin F. Grewe (ETH Zurich)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Credit Assignment in Neural Networks through Deep Feedback Control
More from the Same Authors
- 2021: Uncertainty estimation under model misspecification in neural network regression
  Maria Cervera · Rafael Dätwyler · Francesco D'Angelo · Hamza Keurti · Benjamin F. Grewe · Christian Henning
- 2022: Homomorphism AutoEncoder --- Learning Group Structured Representations from Observed Transitions
  Hamza Keurti · Hsiao-Ru Pan · Michel Besserve · Benjamin F. Grewe · Bernhard Schölkopf
- 2022: Meta-Learning via Classifier(-free) Guidance
  Elvis Nava · Seijin Kobayashi · Yifei Yin · Robert Katzschmann · Benjamin F. Grewe
- 2022: Panel
  Tyler Hayes · Tinne Tuytelaars · Subutai Ahmad · João Sacramento · Zsolt Kira · Hava Siegelmann · Christopher Summerfield
- 2022 Poster: A contrastive rule for meta-learning
  Nicolas Zucchet · Simon Schug · Johannes von Oswald · Dominic Zhao · João Sacramento
- 2022 Poster: The least-control principle for local learning at equilibrium
  Alexander Meulemans · Nicolas Zucchet · Seijin Kobayashi · Johannes von Oswald · João Sacramento
- 2022 Poster: Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
  Seijin Kobayashi · Pau Vilimelis Aceituno · Johannes von Oswald
- 2021 Poster: Posterior Meta-Replay for Continual Learning
  Christian Henning · Maria Cervera · Francesco D'Angelo · Johannes von Oswald · Regina Traber · Benjamin Ehret · Seijin Kobayashi · Benjamin F. Grewe · João Sacramento
- 2021 Poster: Learning where to learn: Gradient sparsity in meta and continual learning
  Johannes von Oswald · Dominic Zhao · Seijin Kobayashi · Simon Schug · Massimo Caccia · Nicolas Zucchet · João Sacramento
- 2020 Poster: A Theoretical Framework for Target Propagation
  Alexander Meulemans · Francesco Carzaniga · Johan Suykens · João Sacramento · Benjamin F. Grewe
- 2020 Spotlight: A Theoretical Framework for Target Propagation
  Alexander Meulemans · Francesco Carzaniga · Johan Suykens · João Sacramento · Benjamin F. Grewe