Oriol Vinyals, Yangqing Jia, Li Deng, Trevor Darrell

UC Berkeley; UC Berkeley; Microsoft Research; UC Berkeley

Poster: Learning with Recursive Perceptual Representations

2:00pm – 12:00am Thursday, December 06, 2012

Harrah’s Special Events Center 2nd Floor

This is part of the Poster Session which begins at 14:00 on Thursday December 6, 2012

Th76

Linear Support Vector Machines (SVMs) have become very popular in vision as part of state-of-the-art object recognition and other classification pipelines, but they require high-dimensional feature spaces for good performance. Deep learning methods can find more compact representations, but current approaches rely on multilayer perceptrons that require solving a difficult, non-convex optimization problem. We propose a deep non-linear classifier whose layers are SVMs and which incorporates random projection as its core stacking element. Our method learns layers of linear SVMs that recursively transform the original data manifold through a random projection of the weak prediction computed at each layer. It scales like linear SVMs, does not rely on kernel computations or non-convex optimization, and exhibits better generalization ability than kernel-based SVMs. This is especially true when the number of training samples is smaller than the dimensionality of the data, a common scenario in many real-world applications. The use of random projections is key to our method: as we show in the experiments section, we observe a consistent improvement over previous, often more complicated, methods on several vision and speech benchmarks.
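The recursive stacking described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hyper-parameters (`beta`, `n_layers`), the sigmoid squashing, the toy dataset, and the use of scikit-learn's `LinearSVC` as the per-layer linear SVM are all illustrative assumptions. Each layer trains a linear SVM, projects its weak predictions back into the input space through a fixed random matrix, and adds the result to the original data before squashing, so the data manifold is transformed recursively without any non-convex optimization.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import LinearSVC

# Illustrative sketch of recursive SVM stacking with random projections.
# beta, n_layers, and the sigmoid are assumed choices, not the paper's exact values.
rng = np.random.default_rng(0)
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
n, d = X.shape

beta, n_layers = 2.0, 4
X_layer = X.copy()
accs = []
for layer in range(n_layers):
    # Each layer is a plain linear SVM (convex problem, scales linearly).
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X_layer, y)
    accs.append(clf.score(X_layer, y))
    # Weak predictions from this layer (decision-function scores).
    scores = clf.decision_function(X_layer).reshape(n, -1)
    # Fixed random projection of the weak predictions into the input space.
    W = rng.standard_normal((d, scores.shape[1]))
    # Shift the ORIGINAL data by the projected predictions, then squash,
    # recursively transforming the data manifold for the next linear SVM.
    X_layer = 1.0 / (1.0 + np.exp(-(X + beta * scores @ W.T)))
```

Because each layer solves an ordinary linear SVM and the projection matrices are random and fixed, the whole stack avoids kernel computations and non-convex training.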

Download the PDF