

DP-Mix: Mixup-based Data Augmentation for Differentially Private Learning

Wenxuan Bao · Francesco Pittaluga · Vijay Kumar B G · Vincent Bindschaedler

Great Hall & Hall B1+B2 (level 1) #1600


Data augmentation techniques, such as image transformations and combinations, are highly effective at improving the generalization of computer vision models, especially when training data is limited. However, such techniques are fundamentally incompatible with differentially private learning approaches, due to the latter’s built-in assumption that each training image’s contribution to the learned model is bounded. In this paper, we investigate why naive applications of multi-sample data augmentation techniques, such as mixup, fail to achieve good performance and propose two novel data augmentation techniques specifically designed for the constraints of differentially private learning. Our first technique, DP-MixSelf, achieves SoTA classification performance across a range of datasets and settings by performing mixup on self-augmented data. Our second technique, DP-MixDiff, further improves performance by incorporating synthetic data from a pre-trained diffusion model into the mixup process. We open-source the code at
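The core constraint behind both techniques is that differentially private training (e.g., DP-SGD) clips and bounds each example's gradient contribution, so mixup must not combine two different private examples. Below is a minimal, illustrative PyTorch sketch of this idea as we read it from the abstract: mixup restricted to self-augmented copies of a single example. The function names, the `k` and `alpha` parameters, and the DP-SGD usage comments are our own assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code) of mixup on self-augmented data:
# all mixed views derive from ONE private image, so the per-example
# sensitivity bound assumed by DP-SGD is preserved. In the DP-Mix_Diff
# variant, some partners would instead be synthetic diffusion-model images,
# which carry no privacy cost.
import torch
import torch.nn.functional as F

def self_augment(image, k, transform):
    """Produce k independently augmented copies of one private image."""
    return torch.stack([transform(image) for _ in range(k)])

def dp_mix_self(image, label, num_classes, k=8, alpha=0.3, transform=lambda x: x):
    """Mixup among self-augmentations of a single example (hypothetical sketch)."""
    views = self_augment(image, k, transform)                    # (k, C, H, W)
    lam = torch.distributions.Beta(alpha, alpha).sample((k, 1, 1, 1))
    perm = torch.randperm(k)
    mixed = lam * views + (1 - lam) * views[perm]                # mix copies of the same image
    one_hot = F.one_hot(torch.tensor(label), num_classes).float()
    # Labels stay unchanged: both mixup partners share the same source label.
    return mixed, one_hot.expand(k, -1)

# Sketch of per-example use inside a DP-SGD microbatch:
#   mixed, targets = dp_mix_self(x_i, y_i, num_classes=10, transform=my_aug)
#   loss = F.cross_entropy(model(mixed), targets)   # averaged over the k views
#   ...then clip this example's gradient once and add noise as usual.
```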
