Poster

Max-Margin Invariant Features from Transformed Unlabelled Data

Dipan Pal · Ashwin Kannan · Gautam Arakalgud · Marios Savvides

Pacific Ballroom #66

Keywords: [ Representation Learning ] [ Kernel Methods ]


Abstract:

The study of representations invariant to common transformations of the data is important to learning. Most techniques focus on local, approximate invariance implemented within expensive optimization frameworks that lack explicit theoretical guarantees. In this paper, we study kernels that are invariant to a unitary group while providing theoretical guarantees for an important practical issue: the unavailability of transformed versions of labelled data. We call this the Unlabeled Transformation Problem, a special form of semi-supervised and one-shot learning. We present a theoretically motivated alternative to the invariant kernel SVM, based on which we propose Max-Margin Invariant Features (MMIF) to solve this problem. As an illustration, we design a framework for face recognition and demonstrate the efficacy of our approach on a large-scale semi-synthetic dataset of 153,000 images and on a new, challenging protocol for Labelled Faces in the Wild (LFW), outperforming strong baselines.
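The unitary-group-invariant kernels the abstract refers to can be illustrated by the standard orbit-averaging construction: average a base kernel over all group transformations of one input. The sketch below is a minimal toy example of this idea, not the paper's MMIF construction; the function names and the 90-degree rotation group are illustrative assumptions.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # standard RBF base kernel, itself unitary-invariant since it
    # depends only on the distance ||x - y||
    d = x - y
    return np.exp(-gamma * d @ d)

def invariant_kernel(x, y, group, base=rbf):
    # average the base kernel over the orbit {g y : g in G};
    # closure of the group makes the result invariant in both arguments
    return float(np.mean([base(x, g @ y) for g in group]))

def rot(theta):
    # 2-D rotation matrix, an element of a finite unitary group
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# toy finite group: rotations by multiples of 90 degrees
group = [rot(k * np.pi / 2) for k in range(4)]

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# invariance check: transforming an input leaves the kernel unchanged
k1 = invariant_kernel(x, y, group)
k2 = invariant_kernel(group[1] @ x, y, group)
assert np.isclose(k1, k2)
```

Because the base kernel is unitary-invariant and the group is closed under composition, transforming either input only permutes the terms being averaged, so the kernel value is unchanged.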
