

Poster in Affinity Workshop: Latinx in AI

Delta Data Augmentation: Enhancing Adversarial Robustness with Adversarial Sampling

Ivan Reyes-Amezcua · Jorge Gonzalez Zapata · Gilberto Ochoa-Ruiz · Andres Mendez-Vazquez


Abstract:

Deep neural networks are vulnerable to adversarial attacks, making adversarial robustness a pressing issue in deep learning. Recent research has demonstrated that even small perturbations of the input data can drastically change a model's output. In this work, we introduce Delta Data Augmentation (DDA), a data augmentation method that enhances transferred adversarial robustness by sampling perturbations extracted from models that have been robustly trained against adversarial attacks. Our method gathers adversarial perturbations from higher-level tasks instead of attacking the target model directly. By incorporating these perturbations into the training of subsequent tasks, it aims to increase both the robustness and the adversarial diversity of the training data. Through rigorous empirical analysis, we demonstrate the advantages of our data augmentation method over the current state of the art in adversarial robustness, particularly under Projected Gradient Descent (PGD) attacks with L2 and L-infinity norms on the CIFAR10, CIFAR100, SVHN, MNIST, and FashionMNIST datasets.
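
The abstract does not specify the exact procedure, but the core idea of sampling stored adversarial perturbations as a data augmentation can be illustrated with a minimal sketch. The sketch below assumes perturbations ("deltas") have already been harvested offline from an adversarially robust source model; the names `DeltaAugment`, `delta_bank`, and the L-infinity clipping step are hypothetical illustrations, not the authors' implementation.

```python
import random
import torch


class DeltaAugment:
    """Hypothetical sketch: augment clean inputs with perturbations
    sampled from a bank of deltas collected from a robust source model."""

    def __init__(self, delta_bank, epsilon=8 / 255, p=0.5):
        # delta_bank: list of precomputed perturbation tensors,
        # e.g. x_adv - x gathered from attacks on a robustly trained model
        self.delta_bank = delta_bank
        self.epsilon = epsilon  # assumed L-infinity budget for clipping
        self.p = p              # probability of applying the augmentation

    def __call__(self, x):
        # x: image tensor of shape (C, H, W) with values in [0, 1]
        if random.random() > self.p:
            return x
        delta = random.choice(self.delta_bank)
        delta = torch.clamp(delta, -self.epsilon, self.epsilon)
        return torch.clamp(x + delta, 0.0, 1.0)


if __name__ == "__main__":
    # Toy usage: random tensors stand in for perturbations extracted
    # from a robustly trained model.
    bank = [torch.empty(3, 32, 32).uniform_(-8 / 255, 8 / 255) for _ in range(10)]
    augment = DeltaAugment(bank)
    clean = torch.rand(3, 32, 32)
    augmented = augment(clean)
    print(augmented.shape)
```

In a training pipeline, such a callable could be composed with standard transforms so that a fraction of each batch carries a sampled delta, which is the augmentation-style use the abstract describes.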
