
Adaptive Risk Minimization: Learning to Adapt to Domain Shift
Marvin Zhang · Henrik Marklund · Nikita Dhawan · Abhishek Gupta · Sergey Levine · Chelsea Finn

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution. However, this assumption is violated in almost all practical applications: machine learning systems are regularly tested under distribution shift, due to changing temporal correlations, atypical end users, or other factors. In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test-time shifts, corresponding to new domains or domain distributions. Most prior methods aim to learn a single robust model or invariant feature space that performs well on all domains. In contrast, we aim to learn models that adapt at test time to domain shift using unlabeled test points. Our primary contribution is to introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains. Compared to prior methods for robustness, invariance, and adaptation, ARM methods provide performance gains of 1-4% test accuracy on a number of image classification problems exhibiting domain shift.
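The core intuition — that a batch of unlabeled test points carries information about the test domain which a model can exploit — can be illustrated with a toy sketch. This is not the paper's algorithm; it is a minimal, hypothetical example where each domain shifts the input by an unknown offset, and simply centering by the unlabeled test batch's mean (an assumed adaptation rule, loosely in the spirit of adapting with batch statistics) recovers accuracy that a fixed model loses:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(offset, n=200):
    # latent feature z determines the label; inputs are shifted by a
    # domain-specific offset unknown at test time
    z = rng.normal(size=n)
    x = z + offset
    y = (z > 0).astype(int)
    return x, y

def predict_fixed(x):
    # a single non-adaptive model thresholds the raw input,
    # so any domain offset directly degrades its accuracy
    return (x > 0).astype(int)

def predict_adapted(x):
    # use the unlabeled test batch itself: centering by the batch mean
    # approximately cancels the unknown offset when classes are balanced
    return (x - x.mean() > 0).astype(int)

for off in [-2.0, 1.5, 3.0]:  # unseen test-time shifts
    x, y = make_domain(off)
    acc_fixed = (predict_fixed(x) == y).mean()
    acc_adapt = (predict_adapted(x) == y).mean()
    print(f"offset={off:+.1f}  fixed={acc_fixed:.2f}  adapted={acc_adapt:.2f}")
```

ARM goes further than this hand-designed rule: it meta-trains the adaptation procedure itself on the training domains, so the model learns *how* to use the unlabeled batch rather than relying on a fixed heuristic like mean-centering.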

Author Information

Marvin Zhang (UC Berkeley)
Henrik Marklund (Stanford University)
Nikita Dhawan (University of California, Berkeley)
Abhishek Gupta (University of California, Berkeley)
Sergey Levine (UC Berkeley)
Chelsea Finn (Stanford University)
