Poster
Moiré Attack (MA): A New Potential Risk of Screen Photos
Dantong Niu · Ruohao Guo · Yisen Wang
Images captured by cameras play a critical role in training Deep Neural Networks (DNNs). We usually assume that the images acquired by a camera are consistent with those perceived by human eyes. However, because the physical mechanisms of human vision and camera imaging differ, the final perceived images can differ substantially in some cases, for example when photographing digital monitors. In this paper, we identify a special phenomenon in digital image processing, the moiré effect, that can pose an unnoticed security threat to DNNs. Based on it, we propose the Moiré Attack (MA), which generates physical-world moiré patterns and adds them to images by mimicking the process of shooting a digital display. Extensive experiments demonstrate that the proposed Moiré Attack provides a perfect camouflage for attackers to tamper with DNNs, achieving a high success rate ($100.0\%$ for untargeted and $97.0\%$ for targeted attacks with a noise budget of $\epsilon=4$), high transferability across different models, and high robustness under various defenses. Furthermore, MA is highly stealthy: because the moiré effect is unavoidable given the camera's internal physical structure, it rarely attracts human attention. Our code is available at https://github.com/Dantong88/Moire_Attack.
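For intuition, the sketch below is a minimal, self-contained toy simulation (not the authors' implementation) of why photographing a screen produces moiré: the image is first "displayed" on an LCD-style grid of R/G/B sub-pixel stripes and then point-sampled by a sensor grid whose pitch is incommensurate with the stripe pitch, so the two periodic patterns beat against each other and leave low-frequency colour fringes. All function names and parameter values here (render_on_lcd, camera_resample, box_blur, simulate_screen_photo, the zoom factor) are illustrative assumptions; the paper's full pipeline additionally tunes the pattern adversarially under the stated noise budget.

import numpy as np

# Toy stand-in for an LCD panel: every pixel becomes a k x k block whose
# columns are pure R, G and B stripes (a rough sub-pixel layout).
def render_on_lcd(img, k=3):
    h, w, _ = img.shape
    big = np.repeat(np.repeat(img, k, axis=0), k, axis=1)   # nearest-neighbour upscale
    mask = np.zeros((k, k, 3), dtype=np.float32)
    for c in range(3):
        mask[:, c, c] = 1.0                                  # stripe column c passes channel c only
    return big * np.tile(mask, (h, w, 1))

# Toy stand-in for the camera sensor: point-sample the panel on a grid whose
# pitch is not an integer multiple of the stripe pitch, so the stripes alias
# into slowly drifting colour fringes (the moiré pattern).
def camera_resample(panel, out_h, out_w):
    h, w, _ = panel.shape
    ys = np.round(np.linspace(0, h - 1, out_h)).astype(int)
    xs = np.round(np.linspace(0, w - 1, out_w)).astype(int)
    return panel[ys][:, xs]

# Cheap box blur standing in for lens blur and demosaicing, which mix
# neighbouring samples and turn the aliasing into smooth colour bands.
def box_blur(x, r=1):
    pad = np.pad(x, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (2 * r + 1) ** 2

# End-to-end toy pipeline: display the image, then "re-shoot" it at a
# resolution deliberately incommensurate with the simulated panel.
def simulate_screen_photo(img, k=3, zoom=1.07):
    h, w, _ = img.shape
    panel = render_on_lcd(img.astype(np.float32), k)
    shot = box_blur(camera_resample(panel, int(h * zoom), int(w * zoom)))
    return np.clip(shot * k, 0.0, 1.0)        # crude gain: each sample saw ~1/k of the light

if __name__ == "__main__":
    flat_grey = np.full((224, 224, 3), 0.5, dtype=np.float32)  # even a flat image acquires fringes
    photo = simulate_screen_photo(flat_grey)
    print(photo.shape, float(photo.min()), float(photo.max()))

Even on a uniform grey input, the output of this toy pipeline shows colour banding, which illustrates why such perturbations look like an ordinary artifact of re-photographing a screen rather than a deliberate attack.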
Author Information
Dantong Niu (University of California Berkeley)
Ruohao Guo (China Agricultural University)
Yisen Wang (Peking University)
More from the Same Authors
- 2021 Spotlight: Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State
  Mingqing Xiao · Qingyan Meng · Zongpeng Zhang · Yisen Wang · Zhouchen Lin
- 2021 Spotlight: Clustering Effect of Adversarial Robust Models
  Yang Bai · Xin Yan · Yong Jiang · Shu-Tao Xia · Yisen Wang
- 2022 Poster: Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors
  Qixun Wang · Yifei Wang · Hong Zhu · Yisen Wang
- 2022 Poster: When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
  Yichuan Mo · Dongxian Wu · Yifei Wang · Yiwen Guo · Yisen Wang
- 2022 Spotlight: Lightning Talks 6A-2
  Yichuan Mo · Botao Yu · Gang Li · Zezhong Xu · Haoran Wei · Arsene Fansi Tchango · Raef Bassily · Haoyu Lu · Qi Zhang · Songming Liu · Mingyu Ding · Peiling Lu · Yifei Wang · Xiang Li · Dongxian Wu · Ping Guo · Wen Zhang · Hao Zhongkai · Mehryar Mohri · Rishab Goel · Yisen Wang · Yifei Wang · Yangguang Zhu · Zhi Wen · Ananda Theertha Suresh · Chengyang Ying · Yujie Wang · Peng Ye · Rui Wang · Nanyi Fei · Hui Chen · Yiwen Guo · Wei Hu · Chenglong Liu · Julien Martel · Yuqi Huo · Wu Yichao · Hang Su · Yisen Wang · Peng Wang · Huajun Chen · Xu Tan · Jun Zhu · Ding Liang · Zhiwu Lu · Joumana Ghosn · Shanshan Zhang · Wei Ye · Ze Cheng · Shikun Zhang · Tao Qin · Tie-Yan Liu
- 2022 Spotlight: How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders
  Qi Zhang · Yifei Wang · Yisen Wang
- 2022 Spotlight: When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
  Yichuan Mo · Dongxian Wu · Yifei Wang · Yiwen Guo · Yisen Wang
- 2022 Spotlight: Lightning Talks 1B-3
  Chaofei Wang · Qixun Wang · Jing Xu · Long-Kai Huang · Xi Weng · Fei Ye · Harsh Rangwani · shrinivas ramasubramanian · Yifei Wang · Qisen Yang · Xu Luo · Lei Huang · Adrian G. Bors · Ying Wei · Xinglin Pan · Sho Takemori · Hong Zhu · Rui Huang · Lei Zhao · Yisen Wang · Kato Takashi · Shiji Song · Yanan Li · Rao Anwer · Yuhei Umeda · Salman Khan · Gao Huang · Wenjie Pei · Fahad Shahbaz Khan · Venkatesh Babu R · Zenglin Xu
- 2022 Spotlight: Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors
  Qixun Wang · Yifei Wang · Hong Zhu · Yisen Wang
- 2022 Poster: How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders
  Qi Zhang · Yifei Wang · Yisen Wang
- 2021 Poster: Clustering Effect of Adversarial Robust Models
  Yang Bai · Xin Yan · Yong Jiang · Shu-Tao Xia · Yisen Wang
- 2021 Poster: On Training Implicit Models
  Zhengyang Geng · Xin-Yu Zhang · Shaojie Bai · Yisen Wang · Zhouchen Lin
- 2021 Poster: Dissecting the Diffusion Process in Linear Graph Convolutional Networks
  Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2021 Poster: Adversarial Neuron Pruning Purifies Backdoored Deep Models
  Dongxian Wu · Yisen Wang
- 2021 Poster: Gauge Equivariant Transformer
  Lingshen He · Yiming Dong · Yisen Wang · Dacheng Tao · Zhouchen Lin
- 2021 Poster: Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State
  Mingqing Xiao · Qingyan Meng · Zongpeng Zhang · Yisen Wang · Zhouchen Lin
- 2021 Poster: Efficient Equivariant Network
  Lingshen He · Yuxuan Chen · zhengyang shen · Yiming Dong · Yisen Wang · Zhouchen Lin
- 2021 Poster: Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness
  Jie Ren · Die Zhang · Yisen Wang · Lu Chen · Zhanpeng Zhou · Yiting Chen · Xu Cheng · Xin Wang · Meng Zhou · Jie Shi · Quanshi Zhang
- 2021 Poster: Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
  Hanxun Huang · Yisen Wang · Sarah Erfani · Quanquan Gu · James Bailey · Xingjun Ma
- 2021 Poster: Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks
  Chen Ma · Xiangyu Guo · Li Chen · Jun-Hai Yong · Yisen Wang
- 2021 Poster: Residual Relaxation for Multi-view Representation Learning
  Yifei Wang · Zhengyang Geng · Feng Jiang · Chuming Li · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2020 Poster: Adversarial Weight Perturbation Helps Robust Generalization
  Dongxian Wu · Shu-Tao Xia · Yisen Wang