

Poster

Deep Networks with Internal Selective Attention through Feedback Connections

Marijn F Stollenga · Jonathan Masci · Faustino Gomez · Jürgen Schmidhuber

Level 2, room 210D

Abstract:

Traditional convolutional neural networks (CNNs) are stationary and feedforward: they neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet's feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance by allowing the network to iteratively focus its internal attention on some of its convolutional filters. The feedback policy is trained through direct policy search in a huge, million-dimensional parameter space using Separable Natural Evolution Strategies (SNES). On the unaugmented CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.
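The abstract describes an iterative loop: run the CNN, summarize its filter activations, and let a feedback policy re-weight (gate) the convolutional filters before the next pass. Below is a minimal PyTorch sketch of that idea; it is not the authors' implementation, and all names (SimpleDasNet, n_passes, the shape of the policy network) are illustrative assumptions. Note that in the paper the feedback policy is trained with SNES, not with backpropagation as a standard PyTorch module would be.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDasNet(nn.Module):
    """Illustrative sketch of dasNet-style internal attention (not the paper's code)."""
    def __init__(self, n_classes=10, n_filters=32, n_passes=3):
        super().__init__()
        self.conv = nn.Conv2d(3, n_filters, kernel_size=3, padding=1)
        self.head = nn.Linear(n_filters, n_classes)
        # Feedback policy: maps per-filter activation statistics to per-filter
        # gates. In the paper this policy is found by direct policy search (SNES).
        self.policy = nn.Linear(n_filters, n_filters)
        self.n_passes = n_passes

    def forward(self, x):
        # Start with neutral gates: every filter is equally sensitive.
        gates = torch.ones(x.size(0), self.conv.out_channels, device=x.device)
        for _ in range(self.n_passes):
            h = F.relu(self.conv(x))                     # feedforward pass
            h = h * gates[:, :, None, None]              # gate filter sensitivities
            summary = h.mean(dim=(2, 3))                 # per-filter statistics
            gates = torch.sigmoid(self.policy(summary))  # feedback -> new gates
        return self.head(summary)

# Usage: classify a batch of CIFAR-sized images with three attention passes.
model = SimpleDasNet()
logits = model(torch.randn(4, 3, 32, 32))  # -> shape (4, 10)
```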
