

Poster

Deep Learning without Weight Transport

Mohamed Akrout · Collin Wilson · Peter Humphreys · Timothy Lillicrap · Douglas Tweed

East Exhibition Hall B + C #102

Keywords: [ Neuroscience ] [ Neuroscience and Cognitive Science ] [ Deep Learning ] [ Biologically Plausible Deep Networks ]


Abstract:

Current algorithms for deep learning probably cannot run in the brain because they rely on weight transport, where forward-path neurons transmit their synaptic weights to a feedback path, in a way that is likely impossible biologically. An algorithm called feedback alignment achieves deep learning without weight transport by using random feedback weights, but it performs poorly on hard visual-recognition tasks. Here we describe two mechanisms — a neural circuit called a weight mirror and a modification of an algorithm proposed by Kolen and Pollack in 1994 — both of which let the feedback path learn appropriate synaptic weights quickly and accurately even in large networks, without weight transport or complex wiring. Tested on the ImageNet visual-recognition task, these mechanisms outperform both feedback alignment and the newer sign-symmetry method, and nearly match backprop, the standard algorithm of deep learning, which uses weight transport.
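The abstract contrasts backprop's use of the transposed forward weights for error feedback with learning rules that avoid weight transport. As an illustration only (not the authors' code), the minimal NumPy sketch below trains a tiny two-layer network using a separate feedback matrix B2 in place of W2.T: with the flag off it behaves like feedback alignment (B2 stays fixed and random), and with the flag on it applies a Kolen–Pollack-style update, giving B2 the transposed increment of W2 plus weight decay so the two matrices drift into alignment. The network size, learning rate, and decay are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: x -> h = tanh(W1 x) -> y = W2 h
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
# Feedback matrix for the hidden layer: random, never copied from W2.
B2 = rng.normal(0, 0.5, (n_hid, n_out))

lr, decay = 0.02, 1e-3   # illustrative hyperparameters

def step(x, target, kolen_pollack=True):
    """One training step without weight transport.

    kolen_pollack=False: plain feedback alignment (B2 stays fixed).
    kolen_pollack=True: B2 receives the transposed increment of W2 plus
    weight decay, so B2 converges toward W2.T without ever reading W2.
    """
    global W1, W2, B2
    # Forward pass
    h = np.tanh(W1 @ x)
    y = W2 @ h
    # Output error, projected back through the feedback path B2
    e = y - target                      # dL/dy for squared error
    delta_h = (B2 @ e) * (1 - h ** 2)   # backprop would use W2.T here
    # Forward-weight updates (outer products of errors and activities)
    dW2 = np.outer(e, h)
    dW1 = np.outer(delta_h, x)
    W2 -= lr * dW2 + decay * W2
    W1 -= lr * dW1 + decay * W1
    if kolen_pollack:
        # Feedback update uses only locally available signals (e and h).
        B2 -= lr * dW2.T + decay * B2
    return 0.5 * float(e @ e)

# Toy regression target: a fixed random linear map.
T = rng.normal(0, 1.0, (n_out, n_in))
for _ in range(2000):
    x = rng.normal(0, 1.0, n_in)
    loss = step(x, T @ x)

# Cosine alignment between the learned feedback and forward weights.
align = np.sum(B2 * W2.T) / (np.linalg.norm(B2) * np.linalg.norm(W2))
print(f"final loss {loss:.4f}, B2 vs W2.T alignment {align:.3f}")
```

Running the sketch with kolen_pollack=True should print an alignment close to 1, showing the feedback weights converging toward the transpose of the forward weights; with kolen_pollack=False the alignment stays near its random initial value, which is the regime where feedback alignment struggles on hard tasks such as ImageNet.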
