NIPS 2006


Echo State Networks and Liquid State Machines

Herbert Jaeger · Wolfgang Maass · Jose C Principe

Black Tusk

A new approach to analyzing and training recurrent neural networks (RNNs) has emerged over the last few years. The central idea is to regard a sparsely connected recurrent circuit as a nonlinear, excitable medium that is driven by input signals (possibly in conjunction with feedback from readouts). This recurrent circuit is, like a kernel in Support Vector Machine applications, not adapted during learning. Rather, very simple (typically linear) readouts are trained to extract desired output signals. Despite this simplicity, it was recently shown that such networks have, in combination with feedback from readouts, universal computational power, both for digital and for analog computation.

There are currently two main flavours of such networks. Echo state networks were developed from a mathematical and engineering background and are composed of simple sigmoid units updated in discrete time. Liquid state machines were conceived from a mathematical and computational neuroscience perspective and are usually made of biologically more plausible spiking neurons with continuous-time dynamics. Generic cortical microcircuits are seen from this perspective as explicit implementations of kernels (in the sense of SVMs), which are therefore not required to carry out specific nonlinear computations, as long as their individual computations and representations are sufficiently diverse. This hypothesis provides a new perspective on neural coding, experimental design in neurobiology, and data analysis.

This workshop will cover theoretical aspects of this approach, applications to concrete engineering tasks, and results of first neurobiological experiments that have tested predictions of this new model of cortical computation.
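The recipe described above (a fixed, sparsely connected recurrent circuit whose states are read out by a trained linear map) can be sketched in a few lines of NumPy. This is a minimal illustrative echo state network on a hypothetical toy task, reproducing a delayed sine wave; the reservoir size, connectivity, spectral radius, and ridge parameter below are arbitrary choices for the sketch, not values prescribed by the workshop organizers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: reproduce the input signal delayed by 5 steps.
T, washout = 2000, 200
u = np.sin(0.2 * np.arange(T))           # input signal
y_target = np.roll(u, 5)                 # target: input delayed by 5 steps

# Fixed, sparsely connected reservoir (the "nonlinear excitable medium").
# It is generated randomly once and never adapted during learning.
N = 200
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.1)  # ~10% connectivity
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))           # spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, size=N)                     # input weights

# Drive the reservoir with the input and collect its states.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train ONLY the linear readout, here by ridge regression on the
# reservoir states after an initial washout period.
X, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

y_pred = states @ W_out
mse = np.mean((y_pred[washout:] - Y) ** 2)
```

A liquid state machine follows the same scheme, but with the discrete-time sigmoid units replaced by spiking neuron models evolving in continuous time, and the readout applied to a filtered version of the spike trains.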
