
Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons
Lars Buesing · Wolfgang Maass

Wed Dec 05 10:30 AM -- 10:40 AM (PST)
We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed (Klampfl et al., 2007). Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals $X$ that are related or are not related to some additional target signal $Y_T$. In a biological interpretation, this target signal $Y_T$ (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals.
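The paper's spiking PCA rule is not reproduced in this abstract, but the general idea of a simple online PCA learning rule can be illustrated with the classical Oja rule, the rate-based analogue of this kind of update. The sketch below is an assumption-laden illustration (synthetic data, hypothetical parameters), not the authors' method:

```python
import numpy as np

# Minimal sketch: Oja's rule, a simple online learning rule that
# extracts the first principal component of its input stream.
# This is NOT the spiking rule from the paper, only a rate-based
# illustration of online PCA learning.

rng = np.random.default_rng(0)

# Synthetic zero-mean input with a known leading principal axis.
C = np.array([[3.0, 1.0], [1.0, 1.0]])        # input covariance
X = rng.multivariate_normal([0.0, 0.0], C, size=20000)

w = rng.standard_normal(2)                     # synaptic weight vector
eta = 0.01                                     # learning rate (assumed)

for x in X:
    y = w @ x                                  # linear "neuron" output
    w += eta * y * (x - y * w)                 # Oja's rule update

# Compare the learned weights with the leading eigenvector of C.
eigvals, eigvecs = np.linalg.eigh(C)
pc1 = eigvecs[:, -1]
alignment = abs(w @ pc1) / np.linalg.norm(w)
print(round(alignment, 3))                     # close to 1.0
```

With the eigenvalue gap of this covariance matrix, the weight vector converges to (plus or minus) the unit-norm leading eigenvector; the sign changes mentioned in the abstract correspond to flipping the update so that minor components can be extracted instead.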

Author Information

Lars Buesing (Columbia University)
Wolfgang Maass (Graz University of Technology - IGI)
