
Towards Biologically Plausible Convolutional Networks
Roman Pogodin · Yash Mehta · Timothy Lillicrap · Peter E Latham

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Convolutional networks are ubiquitous in deep learning. They are particularly useful for images, as they reduce the number of parameters, reduce training time, and increase accuracy. However, as a model of the brain they are seriously problematic, since they require weight sharing, something real neurons simply cannot do. Consequently, while neurons in the brain can be locally connected (one of the features of convolutional networks), they cannot be convolutional. Locally connected but non-convolutional networks, however, significantly underperform convolutional ones. This is troublesome for studies that use convolutional networks to explain activity in the visual system. Here we study plausible alternatives to weight sharing that aim at the same regularization principle, which is to make each neuron within a pool react similarly to identical inputs. The most natural way to do that is by showing the network multiple translations of the same image, akin to saccades in animal vision. However, this approach requires many translations and doesn't remove the performance gap. We propose instead to add lateral connectivity to a locally connected network, and allow learning via Hebbian plasticity. This requires the network to pause occasionally for a sleep-like phase of "weight sharing". This method enables locally connected networks to achieve nearly convolutional performance on ImageNet and improves their fit to ventral stream data, thus supporting convolutional networks as a model of the visual stream.
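The contrast between locally connected and convolutional layers, and the effect of a sleep-like weight-sharing phase, can be illustrated with a small sketch. This is a toy 1-D illustration under our own assumptions (shapes, the full-averaging rule), not the paper's actual Hebbian procedure: each spatial position gets its own filter, and the "sleep phase" pulls all filters in a pool toward their mean, which in the extreme makes the layer exactly convolutional.

```python
import numpy as np

rng = np.random.default_rng(0)

# Locally connected layer: each of the P spatial positions has its OWN
# k-element filter (a convolution would share one filter across positions).
P, k = 6, 3                        # positions and filter size (illustrative)
W = rng.normal(size=(P, k))        # independent filters, one row per position

def local_forward(x, W):
    """Apply each position's private filter to its input window."""
    windows = np.stack([x[p:p + W.shape[1]] for p in range(W.shape[0])])
    return np.einsum('pk,pk->p', W, windows)

def sleep_phase(W):
    """Sleep-like 'weight sharing': replace every filter by the pool mean.
    (The paper learns this via lateral connections and Hebbian plasticity;
    a full average is the idealized end point.)"""
    return np.broadcast_to(W.mean(axis=0), W.shape).copy()

x = rng.normal(size=P + k - 1)
y_local = local_forward(x, W)      # generally NOT a convolution of x

W_shared = sleep_phase(W)
y_conv = local_forward(x, W_shared)

# After sharing, every position uses the same filter, so the output equals
# an ordinary (valid) cross-correlation with that single filter.
assert np.allclose(y_conv, np.correlate(x, W_shared[0], mode='valid'))
```

The point of the sketch is the regularization principle from the abstract: the locally connected layer has P times more filter parameters than the convolutional one, and "weight sharing" is what collapses them back to a single, translation-equivariant filter.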

Author Information

Roman Pogodin (Gatsby Unit, University College London)
Yash Mehta (Albert Ludwigs University of Freiburg)

Hi! I’m currently a research engineer working on neural architecture search under the supervision of Prof **Frank Hutter** (ELLIS Fellow). Previously, I was a researcher at the *Gatsby Computational Neuroscience Unit* at UCL, where I worked on evaluating biologically plausible perturbation-based learning algorithms for training deep networks, under the guidance of **Prof Peter Latham** (Gatsby) and **Tim Lillicrap** (DeepMind). In the past, I’ve also worked on deep learning-based personality detection from text with **Prof Erik Cambria** (NTU Singapore). I thoroughly enjoy coding and working on hard algorithmic problems.

Timothy Lillicrap (DeepMind & UCL)
Peter E Latham (Gatsby Unit, UCL)
