In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders, which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder, which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance.
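The two sparsity constraints described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch of the general idea, not the authors' implementation: lifetime sparsity keeps, for each hidden unit, only the largest fraction of its activations across the mini-batch, while spatial sparsity keeps only the single largest activation within each feature map.

```python
import numpy as np

def lifetime_sparsity(h, rate):
    """For each hidden unit (column of h), keep only the top `rate`
    fraction of activations across the mini-batch; zero the rest.
    h has shape (batch_size, num_units)."""
    batch_size = h.shape[0]
    k = max(1, int(np.ceil(rate * batch_size)))
    # per-unit threshold: the k-th largest activation in each column
    thresh = np.sort(h, axis=0)[-k]          # shape: (num_units,)
    return np.where(h >= thresh, h, 0.0)

def spatial_sparsity(fmap):
    """Within each feature map, keep only the single largest activation
    (the winner-take-all step of the convolutional variant).
    fmap has shape (batch, channels, height, width)."""
    flat = fmap.reshape(fmap.shape[0], fmap.shape[1], -1)
    winners = flat.max(axis=2, keepdims=True)
    return (flat * (flat == winners)).reshape(fmap.shape)
```

In training, only the surviving activations carry gradient, so each filter is updated only on the inputs it "wins"; the decoder then reconstructs the input from this sparse code.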
Author Information
Alireza Makhzani (University of Toronto)
Brendan J Frey (U. Toronto)
Brendan Frey is Co-Founder and CEO of Deep Genomics, a Co-Founder of the Vector Institute for Artificial Intelligence, and a Professor of Engineering and Medicine at the University of Toronto. He is internationally recognized as a leader in machine learning and in genome biology, and his group has published over a dozen papers on these topics in Science, Nature and Cell. His work on using deep learning to identify protein-DNA interactions was recently highlighted on the front cover of Nature Biotechnology (2015), while his work on deep learning dates back to an early paper on what are now called variational autoencoders (Science 1995). He is a Fellow of the Royal Society of Canada, a Fellow of the Institute of Electrical and Electronics Engineers, and a Fellow of the American Association for the Advancement of Science. He has consulted for several industrial research and development laboratories in Canada, the United States and England, and has served on the Technical Advisory Board of Microsoft Research.
More from the Same Authors
- 2021 : Few Shot Image Generation via Implicit Autoencoding of Support Sets »
  Shenyang Huang · Kuan-Chieh Wang · Guillaume Rabusseau · Alireza Makhzani
- 2021 : Your Dataset is a Multiset and You Should Compress it Like One »
  Daniel Severo · James Townsend · Ashish Khisti · Alireza Makhzani · Karen Ullrich
- 2021 Poster: Variational Model Inversion Attacks »
  Kuan-Chieh Wang · Yan Fu · Ke Li · Ashish Khisti · Richard Zemel · Alireza Makhzani
- 2017 Poster: PixelGAN Autoencoders »
  Alireza Makhzani · Brendan J Frey
- 2017 Poster: Min-Max Propagation »
  Christopher Srinivasa · Inmar Givoni · Siamak Ravanbakhsh · Brendan J Frey
- 2017 Invited Talk: Why AI Will Make it Possible to Reprogram the Human Genome »
  Brendan J Frey
- 2015 : Learning Deep Biological Architectures for Genomic Medicine »
  Brendan J Frey
- 2015 Poster: Learning Wake-Sleep Recurrent Attention Models »
  Jimmy Ba · Russ Salakhutdinov · Roger Grosse · Brendan J Frey
- 2015 Spotlight: Learning Wake-Sleep Recurrent Attention Models »
  Jimmy Ba · Russ Salakhutdinov · Roger Grosse · Brendan J Frey
- 2012 Poster: Bayesian n-Choose-k Models for Classification and Ranking »
  Kevin Swersky · Danny Tarlow · Richard Zemel · Ryan Adams · Brendan J Frey
- 2008 Poster: Structured ranking learning using cumulative distribution networks »
  Jim C Huang · Brendan J Frey