Poster
Adversarial Example Games
Joey Bose · Gauthier Gidel · Hugo Berard · Andre Cianflone · Pascal Vincent · Simon Lacoste-Julien · Will Hamilton

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #906
The existence of adversarial examples capable of fooling trained neural network classifiers calls for a much better understanding of possible attacks to guide the development of safeguards against them. This includes attack methods in the challenging non-interactive blackbox setting, where adversarial attacks are generated without any access, including queries, to the target model. Prior attacks in this setting have relied mainly on algorithmic innovations derived from empirical observations (e.g., that momentum helps), lacking principled transferability guarantees. In this work, we provide a theoretical foundation for crafting transferable adversarial examples to entire hypothesis classes. We introduce Adversarial Example Games (AEG), a framework that models the crafting of adversarial examples as a min-max game between a generator of attacks and a classifier. AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class (e.g., architecture). We prove that this game has an equilibrium, and that the optimal generator is able to craft adversarial examples that can attack any classifier from the corresponding hypothesis class. We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets, outperforming prior state-of-the-art approaches with an average relative improvement of 29.9% against undefended models and 47.2% against robust models.
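
To make the min-max formulation above concrete, below is a minimal PyTorch sketch of alternating generator and classifier updates. It is illustrative only and not the authors' implementation: the small MLP architectures, the epsilon bound, the optimizers, and the names Generator and aeg_step are assumptions made for this example.

```python
# Minimal sketch of an AEG-style min-max loop (illustrative; architectures,
# epsilon, and optimizer settings are assumptions, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps a clean image to a bounded adversarial example."""
    def __init__(self, eps=0.3):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 784)
        )

    def forward(self, x):
        # tanh keeps the perturbation within an L-infinity ball of radius eps.
        delta = self.eps * torch.tanh(self.net(x)).view_as(x)
        return torch.clamp(x + delta, 0.0, 1.0)

classifier = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
generator = Generator()
opt_f = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def aeg_step(x, y):
    # Classifier step: minimize the loss on the generated adversarial examples.
    x_adv = generator(x).detach()
    loss_f = F.cross_entropy(classifier(x_adv), y)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

    # Generator step: maximize the classifier's loss (minimize its negation).
    loss_g = -F.cross_entropy(classifier(generator(x)), y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Example usage with a random MNIST-shaped batch (for illustration only).
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
aeg_step(x, y)
```

The generator step ascends the classifier's loss while the classifier step descends it on the generated examples, so alternating the two updates approximates the min-max game described in the abstract.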

Author Information

Joey Bose (McGill/MILA)

I’m a PhD student in the RLLab at McGill/MILA, where I work on adversarial machine learning on graphs. Previously, I was a Master’s student at the University of Toronto, where I researched crafting adversarial attacks on computer vision models using GANs. I also interned at Borealis AI, where I worked on applying adversarial learning principles to learn better embeddings (e.g., word embeddings) for machine learning models.

Gauthier Gidel (Mila)

I am a Ph.D. student at Mila supervised by Simon Lacoste-Julien. I graduated from ENS Ulm and Université Paris-Saclay, and I was a visiting PhD student at Sierra. I also worked for six months as a freelance data scientist for Monsieur Drive (acquired by Criteo), and I recently co-founded a startup called Krypto. My work focuses on optimization applied to machine learning: I develop new optimization algorithms and study the role of optimization in the learning procedure, in short, how to learn faster and better. More details can be found in my resume. I identify with the fields of machine learning (NIPS, ICML, AISTATS, and ICLR) and optimization (SIAM OP).

Hugo Berard (Mila & Facebook AI Research)
Andre Cianflone (Mila/McGill)

I am a PhD student at McGill University, part of the RLLab and Mila. My research is in machine learning, specifically theory of mind, reinforcement learning, and emergent communication.

Pascal Vincent (Facebook and U. Montreal)
Simon Lacoste-Julien (Mila, Université de Montréal & SAIL Montreal)

Simon Lacoste-Julien is an associate professor at Mila and DIRO at Université de Montréal, and a Canada CIFAR AI Chair holder. He also heads, part time, the SAIT AI Lab Montreal from Samsung. His research interests are machine learning and applied math, with applications in related fields like computer vision and natural language processing. He obtained a B.Sc. in mathematics, physics, and computer science from McGill, a PhD in computer science from UC Berkeley, and completed a postdoc at the University of Cambridge. He spent a few years as research faculty at INRIA and École normale supérieure in Paris before coming back to his roots in Montreal in 2016 to answer the call from Yoshua Bengio to help grow the Montreal AI ecosystem.

Will Hamilton (McGill)
