
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses

Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, Venkatesh Babu R

Spotlight presentation: Orals & Spotlights Track 20: Social/Adversarial Learning
2020-12-09, 07:50-08:00 (PST)
Poster Session 4
2020-12-09, 09:00-11:00 (PST)
Abstract: Advances in the development of adversarial attacks have been fundamental to the progress of adversarial defense research. Efficient and effective attacks are crucial for reliable evaluation of defenses, and also for developing robust models. Adversarial attacks are often generated by maximizing standard losses such as the cross-entropy loss or maximum-margin loss within a constraint set using Projected Gradient Descent (PGD). In this work, we introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training. We propose Guided Adversarial Margin Attack (GAMA), which utilizes the function mapping of the clean image to guide the generation of adversaries, thereby resulting in stronger attacks. We evaluate our attack against multiple defenses and show improved performance when compared to existing attacks. Further, we propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses by utilizing the proposed relaxation term for both attack generation and training.
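
The sketch below illustrates the general idea described in the abstract: a PGD-style attack that maximizes a margin loss augmented with a relaxation term tied to the clean image's output mapping. It is a minimal illustration assuming a PyTorch classifier `model`; the function names, loss weighting, and decay schedule are assumptions for exposition, not the paper's exact formulation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def guided_margin_attack(model, x, y, eps=8/255, step_size=2/255, steps=10, lam=10.0):
    """PGD-style attack maximizing a margin loss plus a guided relaxation term.
    Hyperparameters and the decay schedule here are illustrative only."""
    model.eval()
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)          # function mapping of the clean image

    delta = torch.empty_like(x).uniform_(-eps, eps)   # random start inside the eps-ball
    delta.requires_grad_(True)

    for t in range(steps):
        probs = F.softmax(model(torch.clamp(x + delta, 0, 1)), dim=1)

        # Maximum-margin loss: highest incorrect-class probability minus true-class probability
        correct = probs.gather(1, y.unsqueeze(1)).squeeze(1)
        other = probs.scatter(1, y.unsqueeze(1), 0).max(dim=1).values
        margin_loss = (other - correct).mean()

        # Relaxation term: distance between adversarial and clean output distributions,
        # weighted by a coefficient that is decayed over the attack iterations
        relax = ((probs - p_clean) ** 2).sum(dim=1).mean()
        loss = margin_loss + lam * (1 - t / steps) * relax

        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()
        delta = (torch.clamp(x + delta, 0, 1) - x).requires_grad_(True)

    return torch.clamp(x + delta, 0, 1).detach()
```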
