Poster
Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang · Tianyun Zhang · Sijia Liu · Pin-Yu Chen · Jiacen Xu · Makan Fardad · Bo Li

Wed Dec 08 04:30 PM -- 06:00 PM (PST) @ Virtual

The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness. Nevertheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the adversarial context. In this paper, we show how a general notion of min-max optimization over multiple domains can be leveraged in the design of different types of adversarial attacks. In particular, given a set of risk sources, minimizing the worst-case attack loss can be reformulated as a min-max problem by introducing domain weights that are maximized over the probability simplex of the domain set. We showcase this unified framework in three attack generation problems -- attacking model ensembles, devising a universal perturbation over multiple inputs, and crafting attacks resilient to data transformations. Extensive experiments demonstrate that our approach leads to substantial attack improvement over existing heuristic strategies, as well as robustness improvement over state-of-the-art defense methods against multiple perturbation types. Furthermore, we find that the self-adjusted domain weights learned from min-max optimization provide a holistic tool to explain the difficulty level of attack across domains.
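The reformulation described above -- minimizing over the perturbation while maximizing domain weights over the probability simplex -- can be sketched with alternating projected gradient steps. The sketch below is illustrative, not the paper's implementation: the toy quadratic losses, step sizes, and the quadratic regularizer pulling the weights toward uniform (used here to stabilize the maximization step) are all assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def minmax_attack(losses, grads, delta0, steps=200, lr=0.1, gamma=4.0):
    """Alternating min-max over K risk sources:
    - descent on the perturbation delta against the w-weighted loss,
    - ascent on the domain weights w, projected onto the simplex
      (closed form for the quadratically regularized inner problem)."""
    K = len(losses)
    delta = float(delta0)
    w = np.full(K, 1.0 / K)
    for _ in range(steps):
        # minimization step over the perturbation
        delta -= lr * sum(w[k] * grads[k](delta) for k in range(K))
        # maximization step over the domain weights
        f = np.array([losses[k](delta) for k in range(K)])
        w = project_simplex(np.full(K, 1.0 / K) + f / gamma)
    return delta, w

# Toy example: two "domains" with losses (delta-1)^2 and (delta+1)^2.
# The worst-case-optimal perturbation sits at delta = 0, and the learned
# weights end up (roughly) uniform, since both domains are equally hard.
losses = [lambda d: (d - 1.0) ** 2, lambda d: (d + 1.0) ** 2]
grads = [lambda d: 2.0 * (d - 1.0), lambda d: 2.0 * (d + 1.0)]
delta, w = minmax_attack(losses, grads, delta0=2.0)
```

The learned weights w are exactly the "self-adjusted domain weights" the abstract refers to: a domain whose loss stays large receives more weight, flagging it as harder to attack.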

Author Information

Jingkang Wang (Uber ATG, University of Toronto)
Tianyun Zhang (Cleveland State University)
Sijia Liu (Michigan State University)
Pin-Yu Chen (IBM Research AI)
Jiacen Xu (University of California, Irvine)
Makan Fardad (Syracuse University)
Bo Li (UIUC)