Poster
Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks
Roman Pogodin · Peter E Latham

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1565

The state-of-the-art machine learning approach to training deep neural networks, backpropagation, is implausible for real neural networks: neurons need to know their outgoing weights; training alternates between a bottom-up forward pass (computation) and a top-down backward pass (learning); and the algorithm often needs precise labels for many data points. Biologically plausible approximations to backpropagation, such as feedback alignment, solve the weight transport problem, but not the other two. Thus, fully biologically plausible learning rules have so far remained elusive. Here we present a family of learning rules that does not suffer from any of these problems. It is motivated by the information bottleneck principle (extended with kernel methods), in which networks learn to compress the input as much as possible without sacrificing prediction of the output. The resulting rules have a 3-factor Hebbian structure: they require pre- and post-synaptic firing rates and an error signal (the third factor) consisting of a global teaching signal and a layer-specific term, both available without a top-down pass. They do not require precise labels; instead, they rely on the similarity between pairs of desired outputs. Moreover, to obtain good performance on hard problems and retain biological plausibility, our rules need divisive normalization, a known feature of biological networks. Finally, simulations show that our rules perform nearly as well as backpropagation on image classification tasks.
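The sketch below illustrates the kind of layer-local, kernel-based bottleneck objective the abstract describes: a layer's activations Z are pushed to be statistically independent of the input X while staying dependent on the desired outputs Y, with labels entering only through pairwise similarity of outputs. The specific choices here (a biased HSIC estimator as the kernel dependence measure, Gaussian kernels, the trade-off weight gamma, and all function names) are illustrative assumptions, not necessarily the estimator or kernels used in the paper.

```python
# Minimal sketch of a kernelized information-bottleneck objective for one layer.
# Assumptions (not from the paper): biased HSIC as the dependence measure,
# Gaussian kernels, and a single trade-off weight `gamma`.

import numpy as np


def gaussian_kernel(a, sigma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of `a`."""
    sq_dists = (np.sum(a**2, axis=1, keepdims=True)
                + np.sum(a**2, axis=1)
                - 2.0 * a @ a.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))


def hsic(k_a, k_b):
    """Biased empirical HSIC between two kernel matrices (Gretton et al., 2005)."""
    n = k_a.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(k_a @ h @ k_b @ h) / (n - 1) ** 2


def bottleneck_objective(x, z, y, gamma=2.0, sigma=1.0):
    """Layer-local loss: compress the input (low dependence on X)
    while remaining predictive of the output (high dependence on Y).
    Labels enter only through the kernel on Y, i.e. pairwise output similarity."""
    k_x = gaussian_kernel(x, sigma)
    k_z = gaussian_kernel(z, sigma)
    k_y = gaussian_kernel(y, sigma)
    return hsic(k_z, k_x) - gamma * hsic(k_z, k_y)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(64, 20))                 # batch of inputs
    z = np.tanh(x @ rng.normal(size=(20, 10)))    # one layer's activations
    y = np.eye(5)[rng.integers(0, 5, size=64)]    # one-hot desired outputs
    print(bottleneck_objective(x, z, y))
```

Because each layer has its own objective of this form, its gradient can be written in terms of pre-synaptic activity, post-synaptic activity, and a layer-specific error term derived from output similarities, which is the 3-factor Hebbian structure the abstract refers to; the exact derivation and estimator are given in the paper.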

Author Information

Roman Pogodin (Gatsby Unit, University College London)
Peter E Latham (Gatsby Unit, UCL)

More from the Same Authors

  • 2021 Poster: Powerpropagation: A sparsity inducing weight reparameterisation
    Jonathan Schwarz · Sid M Jayakumar · Razvan Pascanu · Peter E Latham · Yee Teh
  • 2021 Poster: Self-Supervised Learning with Kernel Dependence Maximization
    Yazhe Li · Roman Pogodin · Danica J Sutherland · Arthur Gretton
  • 2021 Poster: Towards Biologically Plausible Convolutional Networks
    Roman Pogodin · Yash Mehta · Timothy Lillicrap · Peter E Latham
  • 2019: Poster Session
    Pravish Sainath · Mohamed Akrout · Charles Delahunt · Nathan Kutz · Guangyu Robert Yang · Joe Marino · L F Abbott · Nicolas Vecoven · Damien Ernst · andrew warrington · Michael Kagan · Kyunghyun Cho · Kameron Harris · Leopold Grinberg · John J. Hopfield · Dmitry Krotov · Taliah Muhammad · Erick Cobos · Edgar Walker · Jacob Reimer · Andreas Tolias · Alexander Ecker · Janaki Sheth · Yu Zhang · Maciej Wołczyk · Jacek Tabor · Szymon Maszke · Roman Pogodin · Dane Corneil · Wulfram Gerstner · Baihan Lin · Guillermo Cecchi · Jenna M Reinen · Irina Rish · Guillaume Bellec · Darjan Salaj · Anand Subramoney · Wolfgang Maass · Yueqi Wang · Ari Pakman · Jin Hyung Lee · Liam Paninski · Bryan Tripp · Colin Graber · Alex Schwing · Luke Prince · Gabriel Ocker · Michael Buice · Ben Lansdell · Konrad Kording · Jack Lindsey · Terrence Sejnowski · Matthew Farrell · Eric Shea-Brown · Nicolas Farrugia · Victor Nepveu · Daniel Im · Kristin Branson · Brian Hu · Ram Iyer · Stefan Mihalas · Sneha Aenugu · Hananel Hazan · Sophie Dai · Tan Nguyen · Ying Tsao · Richard Baraniuk · Anima Anandkumar · Hidenori Tanaka · Aran Nayebi · Stephen Baccus · Surya Ganguli · Dean Pospisil · Eilif Muller · Jeffrey S Cheng · Gaël Varoquaux · Kamalaker Dadi · Dimitrios C Gklezakos · Rajesh PN Rao · Anand Louis · Christos Papadimitriou · Santosh Vempala · Naganand Yadati · Daniel Zdeblick · Daniela M Witten · Nick Roberts · Vinay Prabhu · Pierre Bellec · Poornima Ramesh · Jakob H Macke · Santiago Cadena · Guillaume Bellec · Franz Scherr · Owen Marschall · Robert Kim · Hannes Rapp · Marcio Fonseca · Oliver Armitage · Jiwoong Im · Thomas Hardcastle · Abhishek Sharma · Wyeth Bair · Adrian Valente · Shane Shang · Merav Stern · Rutuja Patil · Peter Wang · Sruthi Gorantla · Peter Stratton · Tristan Edwards · Jialin Lu · Martin Ester · Yurii Vlasov · Siavash Golkar
  • 2013 Poster: Demixing odors - fast inference in olfaction
    Agnieszka Grabska-Barwinska · Jeff Beck · Alexandre Pouget · Peter E Latham
  • 2013 Spotlight: Demixing odors - fast inference in olfaction
    Agnieszka Grabska-Barwinska · Jeff Beck · Alexandre Pouget · Peter E Latham
  • 2011 Poster: How biased are maximum entropy models?
    Jakob H Macke · Iain Murray · Peter E Latham
  • 2007 Oral: Neural characterization in partially observed populations of spiking neurons
    Jonathan W Pillow · Peter E Latham
  • 2007 Poster: Neural characterization in partially observed populations of spiking neurons
    Jonathan W Pillow · Peter E Latham