Domain Generalization via Entropy Regularization
Shanshan Zhao · Mingming Gong · Tongliang Liu · Huan Fu · Dacheng Tao

Mon Dec 07 09:00 PM -- 11:00 PM (PST) @ Poster Session 0 #107

Domain generalization aims to learn, from multiple source domains, a predictive model that can generalize to unseen target domains. One essential problem in domain generalization is learning discriminative domain-invariant features. To this end, some methods introduce a domain discriminator through adversarial learning to match the feature distributions across multiple source domains. However, adversarial training can only guarantee that the learned features have invariant marginal distributions, whereas the invariance of conditional distributions is more important for prediction in new domains. To ensure the conditional invariance of learned features, we propose an entropy regularization term that measures the dependency between the learned features and the class labels. Combined with the typical task-related loss, e.g., cross-entropy loss for classification, and adversarial loss for domain discrimination, our overall objective is guaranteed to learn conditional-invariant features across all source domains and thus can learn classifiers with better generalization capabilities. We demonstrate the effectiveness of our method through comparison with state-of-the-art methods on both simulated and real-world datasets. Code is available at: https://github.com/sshan-zhao/DGviaER.
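To make the composite objective concrete, below is a minimal NumPy sketch of how such a training loss could be assembled. The function names (`entropy_regularizer`, `total_loss`), the weighting coefficients `lam` and `mu`, and the choice of measuring the entropy of per-sample class posteriors are illustrative assumptions for exposition; they are not taken from the paper's exact formulation, and the authors' released code at the repository above should be consulted for the actual implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over class logits.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_regularizer(class_logits):
    # Mean Shannon entropy of the per-sample class posteriors
    # (illustrative stand-in for the paper's entropy term).
    p = softmax(class_logits)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=-1))

def total_loss(task_loss, adv_loss, class_logits, lam=0.1, mu=0.1):
    # Composite objective: task loss (e.g., cross-entropy),
    # adversarial domain-discrimination loss, and the entropy
    # regularizer, with assumed trade-off weights lam and mu.
    return task_loss + lam * adv_loss + mu * entropy_regularizer(class_logits)
```

For uniform logits the regularizer equals log(C), the maximum entropy over C classes, so minimizing or maximizing this term directly controls how concentrated the class posteriors are.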

Author Information

Shanshan Zhao (The University of Sydney)
Mingming Gong (University of Melbourne)
Tongliang Liu (The University of Sydney)
Huan Fu (Alibaba Group)
Dacheng Tao (The University of Sydney)