Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function. However, it does not allow training networks with cyclic or backward connections. This is an obstacle to reaching brain-like capabilities, as the highly complex heterarchical structure of the neural connections in the neocortex is potentially fundamental to its effectiveness. In this paper, we show how predictive coding (PC), a theory of information processing in the cortex, can be used to perform inference and learning on arbitrary graph topologies. We experimentally show how this formulation, called PC graphs, can be used to flexibly perform different tasks with the same network by simply stimulating specific neurons. This enables the model to be queried on stimuli with different structures, such as partial images, images with labels, or images without labels. We conclude by investigating how the topology of the graph influences the final performance, and by comparing against simple baselines trained with BP.
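To make the mechanism concrete, the following is a minimal NumPy sketch of predictive-coding inference and learning on an arbitrary weighted graph: a subset of value nodes is clamped to a stimulus, the remaining nodes relax by descending the network's energy, and weights are then updated with a local Hebbian-like rule. The function names, activation, and hyperparameters here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch of predictive coding on an arbitrary directed graph.
# Names and hyperparameters are hypothetical; the paper's formulation may differ.

def f(x):                      # activation function (assumed tanh)
    return np.tanh(x)

def f_prime(x):                # derivative of the activation
    return 1.0 - np.tanh(x) ** 2

def relax(x, W, clamped, steps=100, lr=0.1):
    """Inference: unclamped value nodes descend the energy
    F = 0.5 * sum_i e_i^2, with e_i = x_i - sum_j W[i, j] * f(x_j)."""
    for _ in range(steps):
        mu = W @ f(x)                      # predictions from incoming edges
        e = x - mu                         # prediction errors
        dx = -e + f_prime(x) * (W.T @ e)   # -dF/dx
        x = np.where(clamped, x, x + lr * dx)
    return x, x - W @ f(x)

def learn(x, e, W, lr=0.01):
    """Weight update: dW[i, j] proportional to e_i * f(x_j), local to each edge."""
    return W + lr * np.outer(e, f(x)) * (W != 0)   # keep the graph topology fixed

# Example: a 5-node graph; clamp two "stimulated" nodes and infer the rest.
rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.1, size=(n, n)) * (rng.random((n, n)) < 0.5)
x = rng.normal(size=n)
clamped = np.array([True, True, False, False, False])
x, e = relax(x, W, clamped)
W = learn(x, e, W)
```

Clamping different subsets of nodes (for example, pixels only, pixels plus a label, or a label alone) is what allows the same graph to be queried on stimuli with different structures, as described in the abstract.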
Author Information
Tommaso Salvatori (University of Oxford)
Luca Pinchetti (University of Oxford)
Beren Millidge (University of Edinburgh)
Yuhang Song (University of Oxford)
Tianyi Bao (University of Oxford)
Rafal Bogacz (University of Oxford)
Thomas Lukasiewicz (University of Oxford)
More from the Same Authors
- 2021 : Few-Shot Out-of-Domain Transfer of Natural Language Explanations »
  Yordan Yordanov · Vid Kocijan · Thomas Lukasiewicz · Oana M Camburu
- 2022 : Associative memory via covariance-learning predictive coding networks »
  Mufeng Tang · Tommaso Salvatori · Yuhang Song · Beren Millidge · Thomas Lukasiewicz · Rafal Bogacz
- 2022 : Generalized Predictive Coding: Bayesian Inference in Static and Dynamic models »
  André Ofner · Beren Millidge · Sebastian Stober
- 2022 Spotlight: Predictive Coding beyond Gaussian Distributions »
  Luca Pinchetti · Tommaso Salvatori · Yordan Yordanov · Beren Millidge · Yuhang Song · Thomas Lukasiewicz
- 2022 Spotlight: Lightning Talks 1B-1 »
  Qitian Wu · Runlin Lei · Rongqin Chen · Luca Pinchetti · Yangze Zhou · Abhinav Kumar · Hans Hao-Hsun Hsu · Wentao Zhao · Chenhao Tan · Zhen Wang · Shenghui Zhang · Yuesong Shen · Tommaso Salvatori · Gitta Kutyniok · Zenan Li · Amit Sharma · Leong Hou U · Yordan Yordanov · Christian Tomani · Bruno Ribeiro · Yaliang Li · David P Wipf · Daniel Cremers · Bolin Ding · Beren Millidge · Ye Li · Yuhang Song · Junchi Yan · Zhewei Wei · Thomas Lukasiewicz
- 2022 Poster: Predictive Coding beyond Gaussian Distributions »
  Luca Pinchetti · Tommaso Salvatori · Yordan Yordanov · Beren Millidge · Yuhang Song · Thomas Lukasiewicz
- 2021 Poster: Associative Memories via Predictive Coding »
  Tommaso Salvatori · Yuhang Song · Yujian Hong · Lei Sha · Simon Frieder · Zhenghua Xu · Rafal Bogacz · Thomas Lukasiewicz
- 2020 Poster: Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation »
  Bowen Li · Xiaojuan Qi · Philip Torr · Thomas Lukasiewicz
- 2020 Poster: Coherent Hierarchical Multi-Label Classification Networks »
  Eleonora Giunchiglia · Thomas Lukasiewicz
- 2020 Poster: BoxE: A Box Embedding Model for Knowledge Base Completion »
  Ralph Abboud · Ismail Ceylan · Thomas Lukasiewicz · Tommaso Salvatori
- 2020 Spotlight: BoxE: A Box Embedding Model for Knowledge Base Completion »
  Ralph Abboud · Ismail Ceylan · Thomas Lukasiewicz · Tommaso Salvatori
- 2020 Poster: Can the Brain Do Backpropagation? --- Exact Implementation of Backpropagation in Predictive Coding Networks »
  Yuhang Song · Thomas Lukasiewicz · Zhenghua Xu · Rafal Bogacz
- 2019 Poster: Controllable Text-to-Image Generation »
  Bowen Li · Xiaojuan Qi · Thomas Lukasiewicz · Philip Torr
- 2018 Poster: e-SNLI: Natural Language Inference with Natural Language Explanations »
  Oana-Maria Camburu · Tim Rocktäschel · Thomas Lukasiewicz · Phil Blunsom