Poster
Make Some Noise: Reliable and Efficient Single-Step Adversarial Training
Pau de Jorge Aranda · Adel Bibi · Riccardo Volpi · Amartya Sanyal · Philip Torr · Gregory Rogez · Puneet Dokania
Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks. Experimentally they showed that simply adding a random perturbation prior to FGSM (RS-FGSM) could prevent CO. However, Andriushchenko & Flammarion (2020) observed that RS-FGSM still leads to CO for larger perturbations, and proposed a computationally expensive regularizer (GradAlign) to avoid it. In this work, we methodically revisit the role of noise and clipping in single-step adversarial training. Contrary to previous intuitions, we find that using a stronger noise around the clean sample combined with \textit{not clipping} is highly effective in avoiding CO for large perturbation radii. We then propose Noise-FGSM (N-FGSM) that, while providing the benefits of single-step adversarial training, does not suffer from CO. Empirical analyses on a large suite of experiments show that N-FGSM is able to match or surpass the performance of the previous state-of-the-art GradAlign while achieving a 3$\times$ speed-up.
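The abstract describes the N-FGSM recipe: sample a strong uniform noise around the clean input, take a single FGSM step from the noisy point, and do not clip the result back to the $\epsilon$-ball. A minimal NumPy sketch of that perturbation is below; the helper name `n_fgsm_perturb` and the noise-scale parameter `k` are illustrative, and in practice the gradient would be computed at the noisy point rather than passed in precomputed.

```python
import numpy as np

def n_fgsm_perturb(x, grad, eps, k=2.0, rng=None):
    """Sketch of the N-FGSM perturbation described in the abstract.

    eta ~ U(-k*eps, k*eps) is noise around the clean sample x, followed by
    an FGSM step of size eps. Unlike RS-FGSM, the result is NOT clipped
    back to the eps-ball, so the perturbation may exceed eps.
    `grad` stands in for the loss gradient (ideally taken at x + eta).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    eta = rng.uniform(-k * eps, k * eps, size=x.shape)
    delta = eta + eps * np.sign(grad)  # single FGSM step from the noisy point
    return x + delta                   # no projection onto the eps-ball

# Toy usage: zero input, all-positive gradient
x = np.zeros(8)
grad = np.ones(8)
x_adv = n_fgsm_perturb(x, grad, eps=0.1)
```

The perturbation is bounded by $(k+1)\epsilon$ rather than $\epsilon$, which is precisely the "stronger noise, no clipping" design the abstract credits with avoiding catastrophic overfitting.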
Author Information
Pau de Jorge Aranda (University of Oxford & Naver Labs Europe)

I'm a PhD student at the University of Oxford and Naver Labs Europe. My research interests include but are not limited to deep learning, computer vision, and machine learning.
Adel Bibi (University of Oxford)
Riccardo Volpi (Naver Labs Europe)
Amartya Sanyal (ETH Zurich)
Philip Torr (University of Oxford)
Gregory Rogez (NAVER LABS Europe)
Puneet Dokania (Five AI and University of Oxford)
More from the Same Authors
- 2021 : Occluded Video Instance Segmentation: Dataset and ICCV 2021 Challenge »
  Jiyang Qi · Yan Gao · Yao Hu · Xinggang Wang · Xiaoyu Liu · Xiang Bai · Serge Belongie · Alan Yuille · Philip Torr · Song Bai
- 2021 : Are Vision Transformers Always More Robust Than Convolutional Neural Networks? »
  Francesco Pinto · Philip Torr · Puneet Dokania
- 2021 : Mix-MaxEnt: Improving Accuracy and Uncertainty Estimates of Deterministic Neural Networks »
  Francesco Pinto · Harry Yang · Ser Nam Lim · Philip Torr · Puneet Dokania
- 2022 : Certified defences hurt generalisation »
  Piersilvio De Bartolomeis · Jacob Clarysse · Fanny Yang · Amartya Sanyal
- 2022 Poster: Using Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness »
  Francesco Pinto · Harry Yang · Ser Nam Lim · Philip Torr · Puneet Dokania
- 2022 Poster: Structure-Preserving 3D Garment Modeling with Neural Sewing Machines »
  Xipeng Chen · Guangrun Wang · Dizhong Zhu · Xiaodan Liang · Philip Torr · Liang Lin
- 2022 Poster: Learn what matters: cross-domain imitation learning with task-relevant embeddings »
  Tim Franzmeyer · Philip Torr · João Henriques
- 2022 Poster: FedSR: A Simple and Effective Domain Generalization Method for Federated Learning »
  A. Tuan Nguyen · Philip Torr · Ser Nam Lim
- 2021 : Shape-Tailored Deep Neural Networks With PDEs »
  Naeemullah Khan · Angira Sharma · Philip Torr · Ganesh Sundaramoorthi
- 2021 Poster: You Never Cluster Alone »
  Yuming Shen · Ziyi Shen · Menghan Wang · Jie Qin · Philip Torr · Ling Shao
- 2021 Poster: Looking Beyond Single Images for Contrastive Semantic Segmentation Learning »
  FEIHU ZHANG · Philip Torr · Rene Ranftl · Stephan Richter
- 2021 Poster: FACMAC: Factored Multi-Agent Centralised Policy Gradients »
  Bei Peng · Tabish Rashid · Christian Schroeder de Witt · Pierre-Alexandre Kamienny · Philip Torr · Wendelin Boehmer · Shimon Whiteson
- 2021 Poster: Do Different Tracking Tasks Require Different Appearance Models? »
  Zhongdao Wang · Hengshuang Zhao · Ya-Li Li · Shengjin Wang · Philip Torr · Luca Bertinetto
- 2021 Poster: A Continuous Mapping For Augmentation Design »
  Keyu Tian · Chen Lin · Ser Nam Lim · Wanli Ouyang · Puneet Dokania · Philip Torr
- 2021 Poster: Overcoming the Convex Barrier for Simplex Inputs »
  Harkirat Singh Behl · M. Pawan Kumar · Philip Torr · Krishnamurthy Dvijotham
- 2020 : 19 - Choice of Representation Matters for Adversarial Robustness »
  Amartya Sanyal
- 2020 Poster: STEER : Simple Temporal Regularization For Neural ODE »
  Arnab Ghosh · Harkirat Singh Behl · Emilien Dupont · Philip Torr · Vinay Namboodiri
- 2020 Poster: Calibrating Deep Neural Networks using Focal Loss »
  Jishnu Mukhoti · Viveka Kulharia · Amartya Sanyal · Stuart Golodetz · Philip Torr · Puneet Dokania
- 2020 Poster: Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation »
  Bowen Li · Xiaojuan Qi · Philip Torr · Thomas Lukasiewicz
- 2020 Poster: Continual Learning in Low-rank Orthogonal Subspaces »
  Arslan Chaudhry · Naeemullah Khan · Puneet Dokania · Philip Torr
- 2019 : Coffee + Posters »
  Changhao Chen · Nils Gählert · Edouard Leurent · Johannes Lehner · Apratim Bhattacharyya · Harkirat Singh Behl · Teck Yian Lim · Shiho Kim · Jelena Novosel · Błażej Osiński · Arindam Das · Ruobing Shen · Jeffrey Hawke · Joachim Sicking · Babak Shahian Jahromi · Theja Tulabandhula · Claudio Michaelis · Evgenia Rusak · WENHANG BAO · Hazem Rashed · JP Chen · Amin Ansari · Jaekwang Cha · Mohamed Zahran · Daniele Reda · Jinhyuk Kim · Kim Dohyun · Ho Suk · Junekyo Jhung · Alexander Kister · Matthias Fahrland · Adam Jakubowski · Piotr Miłoś · Jean Mercat · Bruno Arsenali · Silviu Homoceanu · Xiao-Yang Liu · Philip Torr · Ahmad El Sallab · Ibrahim Sobh · Anurag Arnab · Krzysztof Galias
- 2019 Poster: Multi-Agent Common Knowledge Reinforcement Learning »
  Christian Schroeder de Witt · Jakob Foerster · Gregory Farquhar · Philip Torr · Wendelin Boehmer · Shimon Whiteson
- 2019 Poster: Efficient Probabilistic Inference in the Quest for Physics Beyond the Standard Model »
  Atilim Gunes Baydin · Lei Shao · Wahid Bhimji · Lukas Heinrich · Saeid Naderiparizi · Andreas Munk · Jialin Liu · Bradley Gram-Hansen · Gilles Louppe · Lawrence Meadows · Philip Torr · Victor Lee · Kyle Cranmer · Mr. Prabhat · Frank Wood
- 2019 Poster: Controllable Text-to-Image Generation »
  Bowen Li · Xiaojuan Qi · Thomas Lukasiewicz · Philip Torr
- 2018 Poster: A Unified View of Piecewise Linear Neural Network Verification »
  Rudy Bunel · Ilker Turkaslan · Philip Torr · Pushmeet Kohli · Pawan K Mudigonda
- 2017 Poster: Learning Disentangled Representations with Semi-Supervised Deep Generative Models »
  Siddharth Narayanaswamy · Brooks Paige · Jan-Willem van de Meent · Alban Desmaison · Noah Goodman · Pushmeet Kohli · Frank Wood · Philip Torr
- 2016 Poster: Adaptive Neural Compilation »
  Rudy Bunel · Alban Desmaison · Pawan K Mudigonda · Pushmeet Kohli · Philip Torr
- 2016 Poster: Learning feed-forward one-shot learners »
  Luca Bertinetto · João Henriques · Jack Valmadre · Philip Torr · Andrea Vedaldi
- 2013 Poster: Higher Order Priors for Joint Intrinsic Image, Objects, and Attributes Estimation »
  Vibhav Vineet · Carsten Rother · Philip Torr
- 2011 Poster: Learning Anchor Planes for Classification »
  Ziming Zhang · Lubor Ladicky · Philip Torr · Amir Saffari
- 2011 Demonstration: Online structured-output learning for real-time object tracking and detection »
  Sam Hare · Amir Saffari · Philip Torr
- 2008 Poster: Improved Moves for Truncated Convex Models »
  Pawan K Mudigonda · Philip Torr
- 2008 Spotlight: Improved Moves for Truncated Convex Models »
  Pawan K Mudigonda · Philip Torr
- 2007 Oral: An Analysis of Convex Relaxations for MAP Estimation »
  Pawan K Mudigonda · Vladimir Kolmogorov · Philip Torr
- 2007 Poster: An Analysis of Convex Relaxations for MAP Estimation »
  Pawan K Mudigonda · Vladimir Kolmogorov · Philip Torr