Domain Generalization with Nuclear Norm Regularization
Zhenmei Shi · Yifei Ming · Ying Fan · Frederic Sala · Yingyu Liang
Event URL: https://openreview.net/forum?id=mCKNHWWLd1

The ability to generalize to unseen domains is crucial for machine learning systems, especially when we only have data from limited training domains and must deploy the resulting models in the real world. In this paper, we study domain generalization via the classic empirical risk minimization (ERM) approach with a simple regularizer based on the nuclear norm of the learned features from the training set. Theoretically, we provide intuitions on why nuclear norm regularization works better than ERM and ERM with L2 weight decay in linear settings. Empirically, we show that nuclear norm regularization achieves state-of-the-art average accuracy compared to existing methods in a wide range of domain generalization tasks (e.g., a 1.7% test accuracy improvement over the second-best baseline on DomainNet).
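As a minimal sketch (not the authors' implementation), the core idea can be illustrated with NumPy: the nuclear norm of a batch feature matrix is the sum of its singular values, and it is added to the ERM loss with a coefficient. The variable names, the placeholder loss value, and the coefficient below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def nuclear_norm(features):
    # Nuclear norm = sum of the singular values of the feature matrix.
    return np.linalg.svd(features, compute_uv=False).sum()

# Toy example: a batch of 4 samples with 3-dimensional learned features.
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 3))

erm_loss = 0.5   # placeholder ERM loss value (illustrative)
lam = 0.01       # regularization coefficient (illustrative hyperparameter)

# Regularized objective: ERM loss plus the nuclear norm penalty.
total_loss = erm_loss + lam * nuclear_norm(F)
```

In practice the penalty would be computed on the feature matrix produced by the network for each mini-batch and backpropagated along with the task loss; this sketch only shows the quantity being penalized.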

Author Information

Zhenmei Shi (University of Wisconsin-Madison)
Yifei Ming (University of Wisconsin-Madison)

I'm a Ph.D. student at the University of Wisconsin-Madison, broadly interested in trustworthy machine learning and representation learning. Research topics I am currently focusing on include out-of-distribution detection, domain generalization, and supervised and self-supervised (multi-modal) representation learning. My prior research involves designing efficient algorithms and advancing fundamental understanding to enable reliable open-world learning (e.g., the impact of spurious correlations, sample efficiency, and multi-modality).

Ying Fan (University of Wisconsin-Madison)
Frederic Sala (University of Wisconsin-Madison)
Yingyu Liang (University of Wisconsin-Madison)
