

Poster

Relevant sparse codes with variational information bottleneck

Matthew Chalk · Olivier Marre · Gasper Tkacik

Area 5+6+7+8 #196

Keywords: [ (Cognitive/Neuroscience) Perception ] [ (Cognitive/Neuroscience) Neural Coding ] [ (Cognitive/Neuroscience) Reinforcement Learning ] [ (Cognitive/Neuroscience) Theoretical Neuroscience ] [ Sparsity and Feature Selection ] [ Information Theory ]


Abstract:

In many applications, it is desirable to extract only the relevant aspects of data. A principled way to do this is the information bottleneck (IB) method, in which one seeks a code that maximises information about a relevance variable, Y, while constraining the information encoded about the original data, X. Unfortunately, the IB method is computationally demanding when data are high-dimensional and/or non-Gaussian. Here we propose an approximate variational scheme for maximising a lower bound on the IB objective, analogous to variational EM. Using this method, we derive an IB algorithm to recover features that are both relevant and sparse. Finally, we demonstrate how kernelised versions of the algorithm can be used to address a broad range of problems with a non-linear relation between X and Y.
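For context, the standard IB objective (Tishby et al.) and the kind of variational lower bound the abstract alludes to can be sketched as follows; this is an illustrative outline using standard bounds, not necessarily the exact bound the authors derive. Here Z denotes the compressed code, q(y|z) a variational decoder, and r(z) a variational approximation to the code marginal:

\max_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} = I(Z; Y) - \beta \, I(Z; X)

with the standard bounds

I(Z; Y) \ge H(Y) + \mathbb{E}_{p(x,y)\, p(z \mid x)} \big[ \log q(y \mid z) \big], \qquad
I(Z; X) \le \mathbb{E}_{p(x)} \big[ D_{\mathrm{KL}}\big( p(z \mid x) \,\|\, r(z) \big) \big],

which together give a tractable lower bound on \mathcal{L}_{\mathrm{IB}}:

\mathcal{L}_{\mathrm{IB}} \ge \mathbb{E} \big[ \log q(y \mid z) \big] - \beta \, \mathbb{E}_{p(x)} \big[ D_{\mathrm{KL}}\big( p(z \mid x) \,\|\, r(z) \big) \big] + \mathrm{const.}

One would expect a sparsity-promoting choice of r(z) (e.g., a Laplace prior) to encourage the sparse relevant codes the abstract describes; the authors' specific construction may differ.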
