
Deep Networks with Internal Selective Attention through Feedback Connections
Marijn F Stollenga · Jonathan Masci · Faustino Gomez · Jürgen Schmidhuber

Mon Dec 08 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D

Traditional convolutional neural networks (CNNs) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet's feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, using Separable Natural Evolution Strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model on unaugmented datasets.
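The policy-search component mentioned in the abstract can be illustrated with a minimal sketch of Separable Natural Evolution Strategies on a toy objective. This is not the paper's dasNet training code; the function `snes`, its hyperparameters, and the toy objective are illustrative, following the standard SNES update (diagonal Gaussian search distribution, rank-based fitness shaping, natural-gradient steps on the mean and per-dimension standard deviations):

```python
import numpy as np

def snes(f, mu, sigma, iters=300, popsize=20, seed=0):
    """Minimize f with Separable Natural Evolution Strategies (SNES).

    Maintains a diagonal Gaussian search distribution (mean mu, per-dimension
    std sigma) and follows the natural gradient of expected fitness.
    """
    rng = np.random.default_rng(seed)
    d = len(mu)
    eta_mu = 1.0
    eta_sigma = (3 + np.log(d)) / (5 * np.sqrt(d))  # standard SNES default
    # Rank-based fitness shaping: fixed utilities that sum to zero.
    ranks = np.arange(1, popsize + 1)
    util = np.maximum(0.0, np.log(popsize / 2 + 1) - np.log(ranks))
    util = util / util.sum() - 1.0 / popsize
    for _ in range(iters):
        s = rng.standard_normal((popsize, d))    # base samples ~ N(0, I)
        z = mu + sigma * s                       # candidate solutions
        order = np.argsort([f(zi) for zi in z])  # best (lowest f) first
        s = s[order]
        grad_mu = util @ s                       # natural gradient wrt mu
        grad_sigma = util @ (s**2 - 1)           # ... and wrt log sigma
        mu = mu + eta_mu * sigma * grad_mu
        sigma = sigma * np.exp(0.5 * eta_sigma * grad_sigma)
    return mu, sigma

# Toy check: minimize a shifted sphere function.
target = np.array([1.0, -2.0, 0.5])
mu, sigma = snes(lambda z: np.sum((z - target) ** 2),
                 mu=np.zeros(3), sigma=np.ones(3))
```

Because SNES only needs fitness evaluations and scales linearly in the number of parameters, the same loop structure applies when the search distribution lives over the roughly million-dimensional attention-policy weights described in the abstract.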

Author Information

Marijn F Stollenga (TUM)
Jonathan Masci (Università della Svizzera italiana)
Faustino Gomez (IDSIA)
Jürgen Schmidhuber (Swiss AI Lab, IDSIA (USI & SUPSI) - NNAISENSE)

Since age 15, his main goal has been to build an Artificial Intelligence smarter than himself, then retire. The Deep Learning Artificial Neural Networks developed since 1991 by his research groups have revolutionised handwriting recognition, speech recognition, machine translation, image captioning, and are now available to billions of users through Google, Microsoft, IBM, Baidu, and many other companies (DeepMind also was heavily influenced by his lab). His team's Deep Learners were the first to win object detection and image segmentation contests, and achieved the world's first superhuman visual classification results, winning nine international competitions in machine learning & pattern recognition. His formal theory of fun & creativity & curiosity explains art, science, music, and humor. He has published 333 papers, earned 7 best paper/best video awards, the 2013 Helmholtz Award of the International Neural Networks Society, and the 2016 IEEE Neural Networks Pioneer Award. He is also president of NNAISENSE, which aims at building the first practical general purpose AI.
