
 
Poster
Your Classifier can Secretly Suffice Multi-Source Domain Adaptation
Naveen Venkat · Jogendra Nath Kundu · Durgesh Singh · Ambareesh Revanur · Venkatesh Babu R

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #1004

Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain, under a domain-shift. Existing methods aim to minimize this domain-shift using auxiliary distribution alignment objectives. In this work, we present a different perspective on MSDA wherein deep models are observed to implicitly align the domains under label supervision. Thus, we aim to leverage this implicit alignment, without additional training objectives, to perform adaptation. To this end, we use pseudo-labeled target samples and enforce classifier agreement on the pseudo-labels, a process called Self-supervised Implicit Alignment (SImpAl). We find that SImpAl readily works even under category-shift among the source domains. Further, we propose classifier agreement as a cue to determine training convergence, resulting in a simple training algorithm. We provide a thorough evaluation of our approach on five benchmarks, along with detailed insights into each component of our approach.
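To make the core idea concrete, the following is a minimal PyTorch sketch of the pseudo-labeling-with-agreement step described in the abstract: target samples receive a pseudo-label only when all source-specific classifiers agree (and are sufficiently confident), and those samples are then used as additional supervision. This is not the authors' released implementation; the names `backbone`, `classifiers`, `conf_threshold`, and the specific loss weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def pseudo_label_with_agreement(feats, classifiers, conf_threshold=0.9):
    """Pseudo-label target features only where all classifiers agree
    and the averaged confidence exceeds a threshold (assumed heuristic)."""
    probs = [F.softmax(clf(feats), dim=1) for clf in classifiers]  # each (B, C)
    preds = [p.argmax(dim=1) for p in probs]                       # each (B,)

    # Agreement mask: every classifier predicts the same class.
    agree = torch.ones_like(preds[0], dtype=torch.bool)
    for p in preds[1:]:
        agree &= (p == preds[0])

    # Confidence filter on the averaged class probabilities.
    avg_conf = torch.stack(probs).mean(dim=0).max(dim=1).values
    mask = agree & (avg_conf >= conf_threshold)
    return preds[0], mask


def simpal_step(backbone, classifiers, src_batches, tgt_x, optimizer):
    """One training step: supervised loss on labeled source batches plus a
    pseudo-label loss on target samples where the classifiers agree."""
    optimizer.zero_grad()
    loss = 0.0

    # Standard cross-entropy on each labeled source domain.
    for clf, (x, y) in zip(classifiers, src_batches):
        loss = loss + F.cross_entropy(clf(backbone(x)), y)

    # Self-supervision: train all classifiers toward agreed pseudo-labels.
    tgt_feats = backbone(tgt_x)
    with torch.no_grad():
        pl, mask = pseudo_label_with_agreement(tgt_feats, classifiers)
    if mask.any():
        for clf in classifiers:
            loss = loss + F.cross_entropy(clf(tgt_feats[mask]), pl[mask])

    loss.backward()
    optimizer.step()
    return loss.item()
```

The fraction of target samples on which the classifiers agree can also serve as the convergence cue mentioned in the abstract, e.g. stopping training once agreement plateaus.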

Author Information

Naveen Venkat (Indian Institute of Science, Bangalore)

I am a graduate student at the Robotics Institute, Carnegie Mellon University. My research interests include efficient (unsupervised and self-supervised) deep learning methods for computer vision.

Jogendra Nath Kundu (Indian Institute of Science)
Durgesh Singh (Indian Institute of Science)
Ambareesh Revanur (Indian Institute of Science)
Venkatesh Babu R (Indian Institute of Science)
