

Poster

Theoretical Analysis of Adversarial Learning: A Minimax Approach

Zhuozhuo Tu · Jingwei Zhang · Dacheng Tao

East Exhibition Hall B + C #238

Keywords: [ Theory ] [ Learning Theory ]


Abstract:

In this paper, we propose a general theoretical method for analyzing the risk bound in the presence of adversaries. Specifically, we cast the adversarial learning problem in the minimax framework. We first show that the original adversarial learning problem can be transformed into a minimax statistical learning problem by introducing a transport map between distributions. We then prove a new risk bound for this minimax problem in terms of covering numbers under a weak version of the Lipschitz condition. Our method applies to multi-class classification and to popular loss functions, including the hinge loss and the ramp loss. As illustrative examples, we derive adversarial risk bounds for SVMs and deep neural networks; our bounds contain two data-dependent terms, which can be optimized to achieve adversarial robustness.
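To make the reformulation concrete, a minimal sketch of the transport-map view is given below; the perturbation set, the constant $\epsilon$, and the class of maps $\mathcal{T}_\epsilon$ are illustrative assumptions here, and the paper's exact definitions and conditions may differ.

\[
R_{\mathrm{adv}}(f)
  \;=\; \mathbb{E}_{(x,y)\sim P}\Big[\sup_{\|x'-x\|\le \epsilon} \ell\big(f(x'),y\big)\Big]
  \;=\; \sup_{T \in \mathcal{T}_\epsilon} \mathbb{E}_{(x,y)\sim P}\Big[\ell\big(f(T(x)),y\big)\Big],
\]

where $\mathcal{T}_\epsilon$ denotes transport maps that move each input by at most $\epsilon$. The pointwise attack on the left becomes a supremum over perturbed distributions on the right, i.e. a minimax statistical learning problem whose risk can then be bounded via covering numbers.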
