Large Margin Discriminant Dimensionality Reduction in Prediction Space

Ehsan Saberian · Jose Costa Pereira · Nuno Vasconcelos · Can Xu

Area 5+6+7+8 #35

Keywords: [ Ensemble Methods and Boosting ] [ Regularization and Large Margin Methods ] [ Nonlinear Dimension Reduction and Manifold Learning ] [ (Other) Classification ] [ (Application) Computer Vision ]


In this paper we establish a duality between boosting and SVM, and use it to derive a novel discriminant dimensionality reduction algorithm. In particular, using the multiclass formulations of boosting and SVM, we note that both maximize the multiclass margin through the combination of a mapping and linear classifiers. In SVM the mapping is pre-defined (induced by the kernel) and the linear classifiers are optimized. In boosting the linear classifiers are pre-defined and the mapping (the predictor) is learned as a combination of weak learners. We argue that this intermediate mapping, e.g. the boosting predictor, preserves the discriminant aspects of the data, and that by controlling its dimension one can obtain discriminant low-dimensional representations of the data. Building on this duality, we propose a new method, Large Margin Discriminant Dimensionality Reduction (LADDER), that jointly learns the mapping and the linear classifiers in an efficient manner. This yields a data-driven mapping that can embed data into any number of dimensions. Experimental results show that this embedding can significantly improve performance on tasks such as hashing and image/scene classification.
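To make the joint-learning idea concrete, here is a minimal toy sketch (not the authors' LADDER algorithm): it alternately updates a low-dimensional mapping and a set of linear classifiers under a multiclass hinge (margin) loss. For simplicity the mapping is linear, a deliberately simplified stand-in for the nonlinear boosting predictor; all data, dimensions, and learning-rate choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 well-separated Gaussian classes in 10 dimensions.
d, k, n_classes, n = 10, 2, 3, 150
means = rng.normal(scale=3.0, size=(n_classes, d))
X = np.vstack([rng.normal(means[c], 1.0, size=(n // n_classes, d))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n // n_classes)

# Jointly learn a mapping W (d -> k) and linear classifiers V
# (k -> n_classes) by gradient descent on a multiclass hinge loss.
W = rng.normal(scale=0.1, size=(d, k))
V = rng.normal(scale=0.1, size=(k, n_classes))
lr = 0.01

for _ in range(500):
    Z = X @ W                              # low-dimensional embedding
    S = Z @ V                              # class scores
    correct = S[np.arange(n), y]
    margins = S - correct[:, None] + 1.0   # margin violations
    margins[np.arange(n), y] = 0.0
    G = (margins > 0).astype(float)        # hinge subgradient wrt scores
    G[np.arange(n), y] = -G.sum(axis=1)
    V -= lr * (Z.T @ G) / n                # update classifiers
    W -= lr * (X.T @ (G @ V.T)) / n        # update mapping

pred = (X @ W @ V).argmax(axis=1)
acc = (pred == y).mean()
print(f"train accuracy with k={k} embedding: {acc:.2f}")
```

The key point mirrored from the abstract: neither the mapping nor the classifiers is fixed a priori; both are optimized against the same margin objective, and the embedding dimension `k` is a free parameter.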
