

Poster in Workshop: Workshop on Distribution Shifts: New Frontiers with Foundation Models

Are all classes created equal? Domain Generalization for Domain-Linked Classes

Kimathi Kaai · Saad Hossain · Sirisha Rambhatla

Keywords: [ Transfer Learning ] [ Fairness ] [ Domain Generalization ] [ Distribution Shifts ]


Abstract: Domain generalization (DG) focuses on transferring domain-invariant knowledge from multiple source domains (available at train time) to a priori unseen target domain(s). This task implicitly assumes that a class of interest is expressed in multiple source domains (domain-shared), which helps break spurious correlations between domain and class and enables domain-invariant learning. However, we observe that this assumption results in extremely poor generalization performance for classes expressed only in a specific domain (domain-linked). To this end, we develop a contrastive and fairness-based algorithm -- FOND -- that learns generalizable representations for these domain-linked classes by transferring useful representations from domain-shared classes. We perform rigorous experiments against popular baselines across benchmark datasets and demonstrate that, given a sufficient number of domain-shared classes, FOND achieves state-of-the-art (SOTA) results for domain-linked DG.
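The domain-shared vs. domain-linked distinction in the abstract can be made concrete: a class is domain-shared if it appears in more than one source domain, and domain-linked if it appears in exactly one. A minimal sketch of that partition (function and variable names are illustrative, not from the paper's code):

```python
# Hypothetical helper: split source-domain classes into domain-shared
# and domain-linked, based on which domains each class appears in.
from collections import defaultdict

def partition_classes(samples):
    """samples: iterable of (domain, label) pairs from source training data.
    Returns (domain_shared, domain_linked) sets of class labels."""
    domains_per_class = defaultdict(set)
    for domain, label in samples:
        domains_per_class[label].add(domain)
    # Shared: seen in more than one source domain; linked: exactly one.
    shared = {c for c, d in domains_per_class.items() if len(d) > 1}
    linked = {c for c, d in domains_per_class.items() if len(d) == 1}
    return shared, linked

# Example: "dog" appears in two domains (domain-shared),
# "cat" only in the sketch domain (domain-linked).
samples = [("photo", "dog"), ("sketch", "dog"), ("sketch", "cat")]
shared, linked = partition_classes(samples)
# shared == {"dog"}, linked == {"cat"}
```

Under the paper's setting, generalization to the unseen target domain is easy for classes in `shared` and hard for classes in `linked`; FOND's stated goal is to transfer representations from the former to the latter.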
