

Poster in Workshop: Distribution Shifts: Connecting Methods and Applications (DistShift)

Just Mix Once: Mixing Samples with Implicit Group Distribution

Giorgio Giannone · Serhii Havrylov · Jordan Massiah · Emine Yilmaz · Yunlong Jiao


Abstract:

Recent work has shown that average generalization frequently relies on superficial patterns in the data. The consequence is brittle models that perform poorly when the group distribution shifts at test time. When the subgroups in the training data are known, tools from robust optimization and regularization can be used to tackle the problem. However, group annotation and identification are daunting, time-consuming tasks, seldom performed on large datasets. A recent line of research~\cite{liu2021just} tackles this problem by treating the group distribution as implicit at training time, leveraging self-supervision and oversampling to improve generalization on minority groups. Following these ideas, we propose a new class-conditional variant of mixup~\cite{zhang2017mixup} for worst-group generalization, augmenting the training distribution with a continuous distribution of groups. Our method, called Just Mix Once, is domain-agnostic, computationally efficient, and performs on par with or better than the state of the art on worst-group generalization.
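The abstract does not spell out implementation details, but as a rough illustration, one way to realize a class-conditional mixup step is sketched below in PyTorch. The within-class pairing, the function name, and the Beta-sampled mixing coefficient are assumptions for illustration, not the paper's exact recipe.

import torch

def class_conditional_mixup(x, y, alpha=1.0):
    """Mix each sample with another sample of the same class.

    Hypothetical sketch: inputs sharing a label are interpolated, so the
    (hard) label is preserved. Details are assumptions, not the paper's
    exact method.
    """
    # Mixing coefficient drawn from a Beta distribution, as in standard mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Build a permutation that only shuffles indices within each class.
    perm = torch.arange(x.size(0))
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        perm[idx] = idx[torch.randperm(idx.numel())]
    # Interpolate each sample with its same-class partner.
    x_mixed = lam * x + (1 - lam) * x[perm]
    return x_mixed, y  # labels unchanged, since partners share the class

In a training loop, such a step would replace (x, y) with class_conditional_mixup(x, y) before the forward pass; because mixing stays within a class, the hard labels can be reused directly.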
