Learning Transferrable Representations for Unsupervised Domain Adaptation

Ozan Sener · Hyun Oh Song · Ashutosh Saxena · Silvio Savarese

Area 5+6+7+8 #32

Keywords: [ Regularization and Large Margin Methods ] [ Multi-task and Transfer Learning ] [ (Application) Object and Pattern Recognition ] [ (Application) Computer Vision ] [ Deep Learning or Neural Networks ]


Supervised learning with large-scale labelled datasets and deep layered models has caused a paradigm shift in many areas of learning and recognition. However, this approach still suffers from generalization issues in the presence of a domain shift between the training and test data distributions. Unsupervised domain adaptation algorithms directly address this shift between a labelled source dataset and an unlabelled target dataset, and recent papers have shown promising results by fine-tuning networks with domain adaptation loss functions that align the mismatched training and testing distributions. Nevertheless, these deep-learning-based domain adaptation approaches still suffer from issues such as high sensitivity to gradient reversal hyperparameters and overfitting during the fine-tuning stage. In this paper, we propose a unified deep learning framework in which the representation, the cross-domain transformation, and the target label inference are jointly optimized in an end-to-end fashion for unsupervised domain adaptation. Our experiments show that the proposed method outperforms state-of-the-art algorithms by a large margin on both object recognition and digit classification. We will make our learned models as well as the source code available immediately upon acceptance.
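To illustrate the transductive flavour of jointly inferring target labels while adapting the source model, the following is a minimal sketch, not the authors' implementation: it alternates between pseudo-labelling unlabelled target points by nearest class centroid and re-estimating the centroids from both domains. The function name, the nearest-centroid labelling rule, and the fixed iteration count are all illustrative assumptions; the actual paper operates on learned deep representations with its own transduction and loss.

```python
import numpy as np

def transductive_adapt(Xs, ys, Xt, n_classes, n_iters=10):
    """Alternating label inference / model adaptation sketch (illustrative).

    Xs, ys : labelled source features and labels
    Xt     : unlabelled target features
    Returns inferred (pseudo) labels for the target points.
    """
    # Initialize class centroids from the labelled source domain only.
    centroids = np.stack([Xs[ys == c].mean(axis=0) for c in range(n_classes)])
    yt = None
    for _ in range(n_iters):
        # Transduction step: assign each target point to its nearest centroid.
        dists = ((Xt[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        yt = dists.argmin(axis=1)
        # Adaptation step: refit centroids on source + pseudo-labelled target.
        for c in range(n_classes):
            pts = np.concatenate([Xs[ys == c], Xt[yt == c]], axis=0)
            centroids[c] = pts.mean(axis=0)
    return yt
```

Because the centroid update mixes source points with pseudo-labelled target points, the class prototypes drift toward the target distribution over iterations, which is the intuition behind jointly optimizing label inference and the cross-domain model rather than fixing one while fitting the other.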
