Workshop
Computing with Spikes
Sander M Bohte · Thomas Nowotny · Cristina Savin · Davide Zambrano

Fri Dec 09 11:00 PM - 09:30 AM (PST) @ Room 122 + 123
Event URL: https://www.cwi.nl/computing-spikes-nips-2016-workshop

Despite remarkable computational success, artificial neural networks ignore the spiking nature of neural communication that is fundamental to biological neuronal networks. Understanding how spiking neurons process information and learn remains an essential challenge. It concerns not only neuroscientists studying brain function, but also neuromorphic engineers developing low-power computing architectures and machine learning researchers devising new biologically inspired learning algorithms. Unfortunately, despite a shared interest in spike-based computation, the interactions between these subfields remain limited. This workshop aims to bring them together and to foster exchange by focusing on recent developments in efficient neural coding and computation with spiking neurons. The discussion will center on critical questions in the field that benefit from varied perspectives, such as "what are the underlying paradigms?", "what are the fundamental constraints?", and "what are the measures for progress?". The workshop will combine invited talks reviewing the state of the art with short contributed presentations, and it will conclude with a panel discussion.

Fri 11:50 p.m. - 12:00 a.m.
Workshop opening (Opening)
Sat 12:00 a.m. - 12:30 a.m.

It is very difficult to construct by hand recurrent networks of noisy spiking neurons that can carry out nontrivial computational tasks; evolution has obviously found a different strategy. We have therefore analyzed the power of reward-based learning for configuring the connections and parameters (synaptic weights) of such a network. More specifically, we have considered a model where stochastic local plasticity rules drive the network to search for highly rewarded network configurations. At an abstract level, the resulting paradigm provides an interesting alternative to classical policy learning through gradient ascent: a continuous policy search through stochastic sampling from a posterior distribution that integrates structural constraints with reward expectations.

Wolfgang Maass
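
The sampling view of learning described in this abstract can be illustrated with a toy Langevin-style weight update: noisy local plasticity draws network parameters from a posterior that combines a structural prior with reward expectations. The quadratic reward, Gaussian prior, and all parameter values below are illustrative assumptions, not the model from the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy reward landscape: R(w) is maximal at an unknown target configuration.
    w_target = rng.normal(size=5)

    def reward_grad(w):
        # Gradient of R(w) = -||w - w_target||^2
        return -2.0 * (w - w_target)

    eta = 0.01         # plasticity rate (assumed)
    sigma_prior = 1.0  # width of the Gaussian structural prior (assumed)
    T = 0.1            # temperature scaling the exploration noise (assumed)

    w = rng.normal(size=5)
    for _ in range(5000):
        # Drift toward high reward and toward the prior, plus noise:
        # this samples from the posterior p(w) ~ prior(w) * exp(R(w) / T).
        drift = reward_grad(w) / T - w / sigma_prior**2
        w += eta * drift + np.sqrt(2 * eta) * rng.normal(size=w.shape)

    print("distance to target:", np.linalg.norm(w - w_target))

Unlike gradient ascent, the noise term never switches off, so the weights keep exploring the posterior and the network samples highly rewarded configurations rather than freezing at a single optimum.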
Sat 12:30 a.m. - 1:00 a.m.
Robotic Vision with Dynamic Vision Sensors (Talk)
Tobi Delbruck
Sat 1:00 a.m. - 1:30 a.m.

Theory and Tools for the Conversion of Analog to Spiking Convolutional Neural Networks · Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, and Michael Pfeiffer

Fast and Efficient Asynchronous Neural Computation in Deep Adaptive Spiking Neural Networks · Davide Zambrano and Sander Bohte

A wake-sleep algorithm for recurrent, spiking neural networks · Johannes Thiele, Peter Diehl and Matthew Cook

Deep counter networks for asynchronous event-based processing · Jonathan Binas, Giacomo Indiveri and Michael Pfeiffer

Spike-based reinforcement learning for temporal stimulus segmentation and decision making · Luisa Le Donne, Luca Mazzucato, Robert Urbanczik, Walter Senn and Giancarlo La Camera

Sat 1:00 a.m. - 1:05 a.m.
Theory and Tools for the Conversion of Analog to Spiking Convolutional Neural Networks (Spotlight)
Bodo Rueckauer
Sat 1:05 a.m. - 1:10 a.m.
Fast and Efficient Asynchronous Neural Computation in Deep Adaptive Spiking Neural Networks (Spotlight)
Davide Zambrano
Sat 1:10 a.m. - 1:15 a.m.
A wake-sleep algorithm for recurrent, spiking neural networks (Spotlight)
Johannes Thiele
Sat 1:15 a.m. - 1:20 a.m.
Deep counter networks for asynchronous event-based processing (Spotlight)
Jonathan Binas
Sat 1:20 a.m. - 1:25 a.m.
Spike-based reinforcement learning for temporal stimulus segmentation and decision making (Spotlight)
Giancarlo La Camera
Sat 1:30 a.m. - 2:00 a.m.
Coffee break and Posters (Break)
Sat 2:00 a.m. - 2:30 a.m.

Deep learning has made great strides in the last few years. For example, it is now possible to train networks with millions of neurons, using gradient-based learning methods, to classify images at near-human performance. One exciting possibility is to run these networks on energy-efficient neuromorphic hardware, such as IBM's TrueNorth chip. However, these specialized architectures impose constraints that are not typically considered in deep learning; for example, to achieve energy efficiency, TrueNorth uses low-precision synapses, spiking neurons, and restricted fan-in. In this talk, I will describe our recent work that modifies deep learning to be compatible with typical neuromorphic constraints. Using this approach, we demonstrate near state-of-the-art accuracy on 8 datasets, while running between 1,200 and 2,600 frames per second and using between 25 mW and 275 mW on TrueNorth.

Paul Merolla
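
One common way to make gradient-based training compatible with constraints like TrueNorth's low-precision synapses is to run the forward pass with constrained weights while updating full-precision shadow weights (a straight-through estimator). The ternary quantizer and toy logistic-regression task below are a minimal sketch under that assumption, not the actual TrueNorth training pipeline.

    import numpy as np

    rng = np.random.default_rng(1)

    def ternarize(w, thresh=0.3):
        # Constrain effective synapses to {-1, 0, +1}, mimicking low-precision hardware.
        return np.sign(w) * (np.abs(w) > thresh)

    # Toy linearly separable binary task
    X = rng.normal(size=(200, 8))
    y = (X @ rng.normal(size=8) > 0).astype(float)

    W = rng.normal(scale=0.5, size=8)  # full-precision "shadow" weights
    lr = 0.1
    for _ in range(500):
        Wq = ternarize(W)                  # forward pass uses constrained weights
        p = 1 / (1 + np.exp(-(X @ Wq)))    # sigmoid readout
        grad = X.T @ (p - y) / len(y)      # gradient computed through Wq ...
        W -= lr * grad                     # ... applied to the shadow weights

    acc = np.mean((1 / (1 + np.exp(-(X @ ternarize(W)))) > 0.5) == y)
    print(f"train accuracy with ternary weights: {acc:.2f}")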
Sat 2:30 a.m. - 2:50 a.m.

Deep Spiking Networks · Peter O’Connor and Max Welling

Optimization-based computation with spiking neurons · Stephen Verzi, Craig Vineyard, Eric Vugrin, Meghan Galiardi, Conrad James and James Aimone

Towards deep learning with spiking neurons in energy based models with contrastive Hebbian plasticity · Thomas Mesnard, Wulfram Gerstner and Johanni Brea

Can we be formal in assessing the strengths and weaknesses of neural architectures? A case study using a spiking cross-correlation algorithm · William Severa, Kristofor Carlson, Ojas Parekh, Craig Vineyard and James Aimone

Sat 2:30 a.m. - 2:35 a.m.
Deep Spiking Networks (Spotlight)
Peter O'Connor
Sat 2:35 a.m. - 2:40 a.m.
Optimization-based computation with spiking neurons (Spotlight)
Stephen Verzi
Sat 2:40 a.m. - 2:45 a.m.
Towards deep learning with spiking neurons in energy based models with contrastive Hebbian plasticity (Spotlight)
Johanni Brea
Sat 2:45 a.m. - 2:50 a.m.
Can we be formal in assessing the strengths and weaknesses of neural architectures? A case study using a spiking cross-correlation algorithm (Spotlight)
William Severa
Sat 2:50 a.m. - 3:30 a.m.

Storage capacity of spatio-temporal patterns in LIF spiking networks: mixed rate and phase coding · Antonio de Candia and Silvia Scarpetta

Theory and Tools for the Conversion of Analog to Spiking Convolutional Neural Networks · Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, and Michael Pfeiffer

Somatic inhibition controls dendritic selectivity in a sparse coding network of spiking neurons · Damien Drix

Fast and Efficient Asynchronous Neural Computation in Deep Adaptive Spiking Neural Networks · Davide Zambrano and Sander Bohte

Spiking memristor logic gates are a type of time-variant perceptron · Ella Gale

A wake-sleep algorithm for recurrent, spiking neural networks · Johannes Thiele, Peter Diehl and Matthew Cook

Deep counter networks for asynchronous event-based processing · Jonathan Binas, Giacomo Indiveri and Michael Pfeiffer

Spike-based reinforcement learning for temporal stimulus segmentation and decision making · Luisa Le Donne, Luca Mazzucato, Robert Urbanczik, Walter Senn and Giancarlo La Camera

Deep Spiking Networks · Peter O’Connor and Max Welling

Working Memory in Adaptive Spiking Neural Networks · Roeland Nusselder, Davide Zambrano and Sander Bohte

An Efficient Approach to Boosting Performance of Deep Spiking Network Training · Seongsik Park, Sung-gil Lee, Huynha Nam and Sungroh Yoon

Optimization-based computation with spiking neurons · Stephen Verzi, Craig Vineyard, Eric Vugrin, Meghan Galiardi, Conrad James and James Aimone

Learning binary or real-valued time-series via spike-timing dependent plasticity · Takayuki Osogami

Towards deep learning with spiking neurons in energy based models with contrastive Hebbian plasticity · Thomas Mesnard, Wulfram Gerstner and Johanni Brea

Can we be formal in assessing the strengths and weaknesses of neural architectures? A case study using a spiking cross-correlation algorithm · William Severa, Kristofor Carlson, Ojas Parekh, Craig Vineyard and James Aimone

Nonnegative autoencoder with simplified random neural network · Yonghua Yin and Erol Gelenbe

Sat 3:30 a.m. - 5:00 a.m.
Lunch (Break)
Sat 5:00 a.m. - 5:30 a.m.

Biological neurons communicate with a sparing exchange of pulses: spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks while communicating with so few spikes. Building on recent insights in neuroscience, we present an Adaptive Spiking Neural Network (ASNN) based on adaptive spiking neurons. These spiking neurons efficiently encode information in spike trains using a form of Asynchronous Pulsed Sigma-Delta coding while homeostatically optimizing their firing rate. In the proposed paradigm of spiking neuron computation, neural adaptation is tightly coupled to synaptic plasticity to ensure that downstream neurons can correctly decode upstream spiking neurons. We show that this type of network is inherently able to carry out asynchronous and event-driven neural computation while performing identically to corresponding artificial neural networks (ANNs). In particular, we show that these adaptive spiking neurons can serve as drop-in replacements for ReLU units in standard feedforward ANNs. We demonstrate that this also applies to a ReLU-based deep convolutional neural network for classifying the MNIST dataset. The ASNN thus outperforms current spiking neural network (SNN) implementations, while responding up to an order of magnitude faster and using an order of magnitude fewer spikes. Additionally, in a streaming setting where frames are continuously classified, we show that the ASNN requires substantially fewer network updates than the corresponding ANN.

Sander M Bohte
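
A rough intuition for the Asynchronous Pulsed Sigma-Delta coding mentioned above: a neuron spikes only when its input deviates sufficiently from its own running reconstruction of what it has already signalled, and each spike raises the threshold so that the firing rate adapts homeostatically. The dynamics and all constants below are illustrative guesses for a minimal sketch, not the ASNN model itself.

    import numpy as np

    dt = 1e-3
    t = np.arange(0, 1, dt)
    signal = 0.5 + 0.4 * np.sin(2 * np.pi * 3 * t)  # activation to transmit

    tau = 0.1        # decay of the internal reconstruction (assumed)
    theta0 = 0.05    # resting threshold (assumed)
    tau_theta = 0.2  # threshold-adaptation time constant (assumed)

    recon, theta = 0.0, theta0
    spikes, recons = [], []
    for s in signal:
        recon *= np.exp(-dt / tau)            # estimate decays between spikes
        theta = theta0 + (theta - theta0) * np.exp(-dt / tau_theta)
        if s - recon > theta / 2:             # signal exceeds estimate: emit a spike
            recon += theta                    # each spike adds a threshold-sized step
            theta *= 1.1                      # adaptation: raise threshold after a spike
            spikes.append(1)
        else:
            spikes.append(0)
        recons.append(recon)

    err = np.mean((signal - np.array(recons)) ** 2)
    print(f"spikes used: {sum(spikes)}, reconstruction MSE: {err:.4f}")

A downstream neuron can decode by adding a threshold-sized kernel per incoming spike, so no analog values ever cross the network, which is exactly what makes the computation event-driven.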
Sat 5:30 a.m. - 6:00 a.m.

Given the rapidly growing interest in neuromorphics and spike-based computation, a wide range of techniques, software frameworks, and hardware implementations now explore these ideas. We have been integrating some of these approaches into a common software toolkit, Nengo, which provides a high-level programming interface for specifying spike-based neural networks and then compiles these models to target different hardware, including CPUs, GPUs, and digital and analog neuromorphics. We will discuss some of the challenges involved in compiling to such a wide range of hardware, and show examples of efficiency gains both for neuroscientific modelling of large-scale biological systems and for modern machine-learning algorithms such as deep networks.

Terrence C Stewart
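
For flavor, a minimal example of the kind of high-level specification Nengo provides: the model below declares spiking LIF ensembles that compute a nonlinear function, and the same model description can be handed to CPU, GPU, or neuromorphic backends (the reference CPU simulator is used here). This is a generic sketch, not an excerpt from the talk.

    import numpy as np
    import nengo

    with nengo.Network() as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # time-varying input
        a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking LIF population
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        nengo.Connection(a, b, function=lambda x: x ** 2)   # decode x^2 from a's spikes
        probe = nengo.Probe(b, synapse=0.01)                # low-pass filtered readout

    with nengo.Simulator(model) as sim:                     # reference CPU backend
        sim.run(1.0)
    print(sim.data[probe][-5:])                             # last few decoded values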
Sat 6:30 a.m. - 7:00 a.m.

The SpiNNaker machine supports large-scale spiking neural networks that operate in biological real time, with up to hundreds of millions of neurons and hundreds of billions of synapses. So far, demonstrations of the machine’s capabilities have been modest in scale, such as small-scale cortical microcolumn models and a stochastic spiking network that solves “diabolical” Sudoku problems, but the platform is now openly available under the auspices of the EU Flagship Human Brain Project, and we look forward to much larger, more challenging demonstrations over the next year or two!

Luis Plana
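
SpiNNaker is typically programmed through the PyNN API via the sPyNNaker front end. Assuming that toolchain, a minimal network looks roughly like the sketch below (population sizes and parameters are placeholders):

    import pyNN.spiNNaker as sim  # sPyNNaker front end for the SpiNNaker machine

    sim.setup(timestep=1.0)  # 1 ms steps, run in biological real time on the machine

    # Poisson noise sources driving a population of LIF neurons
    noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
    neurons = sim.Population(100, sim.IF_curr_exp())
    sim.Projection(noise, neurons, sim.OneToOneConnector(),
                   synapse_type=sim.StaticSynapse(weight=1.0, delay=1.0))

    neurons.record("spikes")
    sim.run(1000.0)                           # simulate one biological second

    spiketrains = neurons.get_data("spikes")  # Neo block with recorded spike trains
    sim.end()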
Sat 7:00 a.m. - 7:30 a.m.
Spike-based probabilistic computation (Talk)
Cristina Savin
Sat 7:30 a.m. - 8:00 a.m.
Panel Discussion (Discussion Panel)

Author Information

Sander M Bohte (Centrum Wiskunde & Informatica (CWI))
Thomas Nowotny (University of Sussex)
Cristina Savin (IST Austria, NYU)
Davide Zambrano (Centrum Wiskunde & Informatica (CWI))
