
Theoretical evidence for adversarial robustness through randomization
Rafael Pinot · Laurent Meunier · Alexandre Araujo · Hisashi Kashima · Florian Yger · Cedric Gouy-Pailler · Jamal Atif

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #100

This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist of injecting noise into the network at inference time. These techniques have proven effective in many contexts, but lack theoretical grounding. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we make two new contributions. The first one relates the randomization rate to robustness against adversarial attacks. This result applies to the general family of exponential distributions, and thus extends and unifies the previous approaches. The second contribution consists in devising a new upper bound on the adversarial risk gap of randomized neural networks. We support our theoretical claims with a set of experiments.
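The randomization scheme described above can be sketched in a few lines: draw noise from an exponential-family distribution (Gaussian here, as an illustration only; the paper's analysis covers the whole family), add it to the input at inference time, and average predictions over several noisy copies. The `randomized_predict` helper and the toy linear "network" below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def randomized_predict(predict_fn, x, sigma=0.1, n_samples=10, rng=None):
    """Average predictions over noisy copies of the input.

    Noise injection at inference time is the randomization scheme the
    paper studies; Gaussian noise (a member of the exponential family)
    is used here purely as an example.
    """
    rng = np.random.default_rng() if rng is None else rng
    # One noisy copy of x per sample, all drawn independently.
    noisy = x + rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    # Average the network's outputs over the randomized inputs.
    return np.mean([predict_fn(z) for z in noisy], axis=0)

# Toy "network": a fixed linear scorer over two classes (illustrative).
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
predict = lambda z: W @ z

x = np.array([0.5, -0.5])
scores = randomized_predict(predict, x, sigma=0.1, n_samples=100,
                            rng=np.random.default_rng(0))
```

With small noise, the averaged scores stay close to the deterministic prediction `W @ x`; the paper's contribution is to quantify how the noise level (the randomization rate) trades off against robustness to adversarial perturbations.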

Author Information

Rafael Pinot (Dauphine University - CEA LIST Institute)
Laurent Meunier (Dauphine University - FAIR Paris)
Alexandre Araujo (Université Paris-Dauphine)
Hisashi Kashima (Kyoto University/RIKEN Center for AIP)
Florian Yger (Université Paris-Dauphine)
Cedric Gouy-Pailler (CEA)
Jamal Atif (Université Paris-Dauphine)