

Poster

Understanding the detrimental class-level effects of data augmentation

Polina Kirichenko · Mark Ibrahim · Randall Balestriero · Diane Bouchacourt · Shanmukha Ramakrishna Vedantam · Hamed Firooz · Andrew Wilson

Great Hall & Hall B1+B2 (level 1) #1619
Thu 14 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

Data augmentation (DA) encodes invariance and provides implicit regularization critical to a model's performance in image classification tasks. However, while DA improves average accuracy, recent studies have shown that its impact can be highly class dependent: achieving optimal average accuracy comes at the cost of significantly hurting individual class accuracy by as much as 20% on ImageNet. There has been little progress in resolving class-level accuracy drops due to a limited understanding of these effects. In this work, we present a framework for understanding how DA interacts with class-level learning dynamics. Using higher-quality multi-label annotations on ImageNet, we systematically categorize the affected classes and find that the majority are inherently ambiguous, co-occur, or involve fine-grained distinctions, while DA controls the model's bias towards one of the closely related classes. While many of the previously reported performance drops are explained by multi-label annotations, we identify other sources of accuracy degradations by analyzing class confusions. We show that simple class-conditional augmentation strategies informed by our framework improve performance on the negatively affected classes.
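To make the "class-conditional augmentation" idea concrete, below is a minimal PyTorch sketch of one plausible implementation: a dataset wrapper that applies a milder augmentation policy to a chosen set of negatively affected classes and the default policy to all others. The wrapper name, the crop-scale values, and the example affected_classes are illustrative assumptions for exposition, not the authors' actual strategy or settings.

```python
from torch.utils.data import Dataset
from torchvision import transforms

class ClassConditionalAugmentation(Dataset):
    """Wrap a labeled image dataset and pick the augmentation policy per class.

    Classes in `affected_classes` (e.g. those hurt by standard DA) get a
    reduced-strength transform; all other classes get the default transform.
    """

    def __init__(self, base_dataset, affected_classes,
                 default_transform, reduced_transform):
        self.base = base_dataset                # yields (PIL image, int label)
        self.affected = set(affected_classes)   # hypothetical list of class ids
        self.default_t = default_transform
        self.reduced_t = reduced_transform

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]
        t = self.reduced_t if label in self.affected else self.default_t
        return t(img), label

# Illustrative policies: an aggressive random-crop pipeline by default and a
# gentler crop range for the affected classes (values chosen for illustration).
default_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
reduced_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),  # less aggressive crop
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```

The design choice here is to condition only the input pipeline on the label, so the model and training loop stay unchanged; an ImageNet-style dataset returning raw PIL images (with transforms left to the wrapper) is assumed.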
