Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency
Samarth Mishra · Kate Saenko · Venkatesh Saligrama
Event URL: https://openreview.net/forum?id=sqBIm0Irju7

Most modern unsupervised domain adaptation (UDA) approaches are rooted in domain alignment, i.e., learning to align source and target features so that a target-domain classifier can be learned from source labels alone. In semi-supervised domain adaptation (SSDA), where the learner also has access to a few target-domain labels, prior approaches have followed UDA theory and relied on domain alignment. We show that the SSDA setting is different: a good target classifier can be learned without explicit alignment. We use self-supervised pretraining and consistency regularization to obtain well-separated target clusters, which aids learning a low-error target classifier and allows our method to outperform recent state-of-the-art approaches on large, challenging benchmarks such as DomainNet and VisDA-17.
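To make the consistency-regularization idea concrete, below is a minimal sketch of one common formulation: pseudo-labels from a weakly augmented view supervise predictions on a strongly augmented view of the same unlabeled target image. The function names, the confidence threshold, and the pseudo-labeling form are illustrative assumptions, not necessarily the paper's exact objective.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_weak, logits_strong, threshold=0.9):
    """Cross-entropy between confident pseudo-labels from a weakly
    augmented view and predictions on a strongly augmented view.

    Illustrative sketch only: `threshold` and the masking scheme are
    assumptions, not the paper's exact formulation.
    """
    probs = softmax(logits_weak)
    confidence = probs.max(axis=-1)
    pseudo_labels = probs.argmax(axis=-1)
    # Only examples the model is already confident about contribute.
    mask = confidence >= threshold
    log_p_strong = np.log(softmax(logits_strong) + 1e-12)
    per_example = -log_p_strong[np.arange(len(pseudo_labels)), pseudo_labels]
    return float((per_example * mask).mean())
```

Penalizing disagreement between the two views encourages predictions to be locally smooth around each target example, which is what drives the well-separated target clusters described above.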

Author Information

Samarth Mishra (Boston University)
Kate Saenko (Boston University & MIT-IBM Watson AI Lab, IBM Research)
Venkatesh Saligrama (Boston University)