The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions. In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are "semantically" equivalent to the original state by adding meaningless moves to the game, and an adversarial state is a perturbed state leading to an undoubtedly inferior action that is obvious even to Go beginners. However, searching for adversarial states is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. The method also extends to other board games such as NoGo. Experimentally, we show that the actions taken by both the Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, on 58% of the AlphaGo Zero self-play games, our method makes the widely used KataGo agent, with 50 MCTS simulations, play a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90% of the examples indeed lead the Go agent to play an obviously inferior action. Our code is available at https://PaperCode.cc/GoAttack.
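To make the search problem concrete, here is a minimal, hypothetical sketch of the naive brute-force view of the attack described above: try adding up to two "meaningless" stones and check whether the agent's action changes. The helpers `agent_move`, `is_meaningless`, and `legal_moves` are stand-ins for a real Go engine and a semantic-equivalence check, not part of the paper's code; the paper's actual contribution is strategically reducing this search space rather than enumerating it exhaustively.

```python
from itertools import combinations

def find_adversarial_state(board, agent_move, is_meaningless, legal_moves,
                           max_stones=2):
    """Brute-force search for a perturbed state that flips the agent's action.

    `board` is represented as a list of moves; a perturbed board appends up to
    `max_stones` candidate meaningless stones. Returns the first perturbed
    board on which the agent's chosen action differs, or None.
    """
    original_action = agent_move(board)
    # Only consider perturbations that keep the state semantically equivalent.
    candidates = [m for m in legal_moves(board) if is_meaningless(board, m)]
    for k in range(1, max_stones + 1):
        for stones in combinations(candidates, k):
            perturbed = board + list(stones)
            if agent_move(perturbed) != original_action:
                return perturbed
    return None
```

Even this sketch shows why the problem is hard: with hundreds of legal points per position, the number of one- and two-stone perturbations grows quadratically, and each check requires a full (non-differentiable) engine query.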
Author Information
Li-Cheng Lan (University of California, Los Angeles)
Huan Zhang (CMU)
Ti-Rong Wu (Academia Sinica)
Meng-Yu Tsai (National Chiao Tung University)
I-Chen Wu (Academia Sinica)
Cho-Jui Hsieh (UCLA, Amazon)
More from the Same Authors
- 2022 : FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning »
  Yuanhao Xiong · Ruochen Wang · Minhao Cheng · Felix Yu · Cho-Jui Hsieh
- 2022 : Denoised Smoothing with Sample Rejection for Robustifying Pretrained Classifiers »
  Fatemeh Sheikholeslami · Wan-Yi Lin · Jan Hendrik Metzen · Huan Zhang · J. Zico Kolter
- 2022 : On the Robustness of Safe Reinforcement Learning under Observational Perturbations »
  Zuxin Liu · Zijian Guo · Zhepeng Cen · Huan Zhang · Jie Tan · Bo Li · Ding Zhao
- 2022 : On the Adversarial Robustness of Vision Transformers »
  Rulin Shao · Zhouxing Shi · Jinfeng Yi · Pin-Yu Chen · Cho-Jui Hsieh
- 2022 : Evaluating Worst Case Adversarial Weather Perturbations Robustness »
  Yihan Wang · Yunhao Ba · Howard Zhang · Huan Zhang · Achuta Kadambi · Stefano Soatto · Alex Wong · Cho-Jui Hsieh
- 2022 : Closing Remarks »
  Huan Zhang · Linyi Li
- 2022 : Panel Discussion »
  Kamalika Chaudhuri · Been Kim · Dorsa Sadigh · Huan Zhang · Linyi Li
- 2022 Workshop: Trustworthy and Socially Responsible Machine Learning »
  Huan Zhang · Linyi Li · Chaowei Xiao · J. Zico Kolter · Anima Anandkumar · Bo Li
- 2022 : Introduction and Opening Remarks »
  Huan Zhang · Linyi Li
- 2022 Poster: Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems »
  Yue Kang · Cho-Jui Hsieh · Thomas Chun Man Lee
- 2022 Poster: Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms »
  Qin Ding · Yue Kang · Yi-Wei Liu · Thomas Chun Man Lee · Cho-Jui Hsieh · James Sharpnack
- 2022 Poster: ELIAS: End-to-End Learning to Index and Search in Large Output Spaces »
  Nilesh Gupta · Patrick Chen · Hsiang-Fu Yu · Cho-Jui Hsieh · Inderjit Dhillon
- 2022 Poster: DC-BENCH: Dataset Condensation Benchmark »
  Justin Cui · Ruochen Wang · Si Si · Cho-Jui Hsieh
- 2022 Poster: Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation »
  Zhouxing Shi · Yihan Wang · Huan Zhang · J. Zico Kolter · Cho-Jui Hsieh
- 2022 Poster: Efficient Non-Parametric Optimizer Search for Diverse Tasks »
  Ruochen Wang · Yuanhao Xiong · Minhao Cheng · Cho-Jui Hsieh
- 2022 Poster: Random Sharpness-Aware Minimization »
  Yong Liu · Siqi Mai · Minhao Cheng · Xiangning Chen · Cho-Jui Hsieh · Yang You
- 2022 Poster: General Cutting Planes for Bound-Propagation-Based Neural Network Verification »
  Huan Zhang · Shiqi Wang · Kaidi Xu · Linyi Li · Bo Li · Suman Jana · Cho-Jui Hsieh · J. Zico Kolter
- 2021 Poster: Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification »
  Shiqi Wang · Huan Zhang · Kaidi Xu · Xue Lin · Suman Jana · Cho-Jui Hsieh · J. Zico Kolter
- 2021 Poster: Learnable Fourier Features for Multi-dimensional Spatial Positional Encoding »
  Yang Li · Si Si · Gang Li · Cho-Jui Hsieh · Samy Bengio
- 2021 Poster: Label Disentanglement in Partition-based Extreme Multilabel Classification »
  Xuanqing Liu · Wei-Cheng Chang · Hsiang-Fu Yu · Cho-Jui Hsieh · Inderjit Dhillon
- 2021 Poster: DRONE: Data-aware Low-rank Compression for Large NLP Models »
  Patrick Chen · Hsiang-Fu Yu · Inderjit Dhillon · Cho-Jui Hsieh
- 2021 Poster: DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification »
  Yongming Rao · Wenliang Zhao · Benlin Liu · Jiwen Lu · Jie Zhou · Cho-Jui Hsieh
- 2021 Poster: Fast Certified Robust Training with Short Warmup »
  Zhouxing Shi · Yihan Wang · Huan Zhang · Jinfeng Yi · Cho-Jui Hsieh
- 2020 Poster: Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond »
  Kaidi Xu · Zhouxing Shi · Huan Zhang · Yihan Wang · Kai-Wei Chang · Minlie Huang · Bhavya Kailkhura · Xue Lin · Cho-Jui Hsieh
- 2020 Poster: Provably Robust Metric Learning »
  Lu Wang · Xuanqing Liu · Jinfeng Yi · Yuan Jiang · Cho-Jui Hsieh
- 2020 Poster: Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Class-Imbalanced Data »
  Utkarsh Ojha · Krishna Kumar Singh · Cho-Jui Hsieh · Yong Jae Lee
- 2020 Poster: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
  Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh
- 2020 Spotlight: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
  Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh
- 2020 Poster: An Efficient Adversarial Attack for Tree Ensembles »
  Chong Zhang · Huan Zhang · Cho-Jui Hsieh
- 2020 Poster: Multi-Stage Influence Function »
  Hongge Chen · Si Si · Yang Li · Ciprian Chelba · Sanjiv Kumar · Duane Boning · Cho-Jui Hsieh
- 2019 Poster: Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers »
  Liwei Wu · Shuqing Li · Cho-Jui Hsieh · James Sharpnack
- 2019 Poster: A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks »
  Hadi Salman · Greg Yang · Huan Zhang · Cho-Jui Hsieh · Pengchuan Zhang
- 2019 Poster: Robustness Verification of Tree-based Models »
  Hongge Chen · Huan Zhang · Si Si · Yang Li · Duane Boning · Cho-Jui Hsieh
- 2019 Poster: Convergence of Adversarial Training in Overparametrized Neural Networks »
  Ruiqi Gao · Tianle Cai · Haochuan Li · Cho-Jui Hsieh · Liwei Wang · Jason Lee
- 2019 Spotlight: Convergence of Adversarial Training in Overparametrized Neural Networks »
  Ruiqi Gao · Tianle Cai · Haochuan Li · Cho-Jui Hsieh · Liwei Wang · Jason Lee
- 2019 Poster: A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning »
  Xuanqing Liu · Si Si · Jerry Zhu · Yang Li · Cho-Jui Hsieh