
Workshop: Information-Theoretic Principles in Cognitive Systems

Similarity-preserving Neural Networks from GPLVM and Information Theory

Yanis Bahroun · Atithi Acharya · Dmitri Chklovskii · Anirvan Sengupta


This work proposes a way of deriving the structure of plausible canonical microcircuit models, replete with feedforward, lateral, and feedback connections, from information-theoretic considerations. The resulting circuits show biologically plausible features, such as being trainable online and having local synaptic update rules reminiscent of the Hebbian principle. Our work achieves these goals by rephrasing Gaussian Process Latent Variable Models (GPLVMs) as a special case of the more recently developed similarity matching framework. One remarkable aspect of the resulting network is the role of lateral interactions in preventing overfitting. Overall, our study emphasizes the importance of recurrent connections in neural networks, both for cognitive tasks in the brain and applications to artificial intelligence.
