Poster

Towards Learning Convolutions from Scratch

Behnam Neyshabur

Poster Session 0 #136

Abstract:

Convolution is one of the most essential components of modern computer-vision architectures. As machine learning moves toward reducing expert bias and learning more from data, a natural next step is to learn convolution-like structures from scratch. This, however, has proven elusive. For example, current state-of-the-art architecture search algorithms use convolution as one of their predefined modules rather than learning it from data. In an attempt to understand the inductive bias that gives rise to convolutions, we investigate minimum description length as a guiding principle and show that, in some settings, it can indeed be indicative of the performance of architectures. To find architectures with small description length, we propose beta-LASSO, a simple variant of the LASSO algorithm that, when applied to fully-connected networks on image classification tasks, learns architectures with local connections and achieves state-of-the-art accuracies for fully-connected networks on CIFAR-10 (84.50%), CIFAR-100 (57.76%), and SVHN (93.84%), bridging the gap between fully-connected and convolutional networks.
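To make the idea concrete, below is a minimal sketch of one plausible reading of a beta-LASSO update: a standard SGD step on the loss plus an l1 penalty, followed by hard-thresholding of weights whose magnitude falls below beta times the regularization strength. The function name, hyperparameter values, and exact thresholding rule here are illustrative assumptions, not the paper's verbatim algorithm.

```python
import torch

def beta_lasso_step(params, loss, lr=0.1, lam=1e-6, beta=50.0):
    """One hypothetical beta-LASSO update (illustrative, not verbatim):
    an l1-regularized SGD step, then hard-thresholding of small weights."""
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            # SGD step on the loss plus a subgradient of lam * ||p||_1
            p -= lr * (g + lam * p.sign())
            # Zero out weights below beta * lam; under this reading,
            # larger beta prunes more aggressively toward sparse,
            # local connectivity
            p[p.abs() < beta * lam] = 0.0

# Toy usage on a small fully-connected network with random data
net = torch.nn.Sequential(torch.nn.Flatten(),
                          torch.nn.Linear(3 * 32 * 32, 256),
                          torch.nn.ReLU(),
                          torch.nn.Linear(256, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = torch.nn.functional.cross_entropy(net(x), y)
beta_lasso_step(list(net.parameters()), loss)

# A crude proxy for description length: nonzero parameters remaining
nonzero = sum(int(p.count_nonzero()) for p in net.parameters())
print(f"nonzero parameters after one step: {nonzero}")
```

Under this assumed rule, the count of surviving nonzero weights gives a rough sparsity-based proxy for the kind of description-length comparison the abstract describes.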
