A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noise. Since the observations deviate from the true states, they can mislead the agent into taking suboptimal actions. Several works have demonstrated this vulnerability via adversarial attacks, but how to improve the robustness of DRL under this setting has not been well studied. We show that naively applying existing techniques for improving robustness in classification tasks, such as adversarial training, is ineffective for many RL tasks. We propose the state-adversarial Markov decision process (SA-MDP) to study the fundamental properties of this problem, and develop a theoretically principled policy regularization that can be applied to a large family of DRL algorithms, including deep deterministic policy gradient (DDPG), proximal policy optimization (PPO) and deep Q networks (DQN), for both discrete and continuous action control problems. We significantly improve the robustness of DDPG, PPO and DQN agents under a suite of strong white-box adversarial attacks, including two new attacks of our own. Additionally, we find that a robust policy noticeably improves DRL performance in a number of environments.
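The policy regularization described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' released implementation: it assumes a PyTorch `policy_net` that maps a batch of observations to action logits (the discrete-action case, as in DQN or PPO), and it uses a few projected-gradient (PGD) steps to approximate the worst-case state perturbation within an l-infinity ball before penalizing the KL divergence between the clean and perturbed action distributions. All names here (`policy_net`, `epsilon`, `pgd_steps`, `step_size`) are hypothetical.

```python
# Hedged sketch of a state-adversarial policy regularizer (illustrative only;
# not the paper's released code). Idea: penalize how much the policy's action
# distribution changes when the observed state is perturbed within an
# l_inf ball of radius `epsilon`.

import torch
import torch.nn.functional as F


def state_adversarial_regularizer(policy_net, states, epsilon=0.05,
                                  pgd_steps=5, step_size=0.02):
    """Approximate max_{||d||_inf <= epsilon} KL(pi(.|s) || pi(.|s+d)) via PGD.

    `policy_net` is assumed to map a batch of states to action logits
    (discrete-action case); all names are hypothetical.
    """
    # Action distribution on the clean states (treated as the fixed target).
    with torch.no_grad():
        clean_log_probs = F.log_softmax(policy_net(states), dim=-1)

    # Start from a random perturbation inside the l_inf ball.
    delta = torch.empty_like(states).uniform_(-epsilon, epsilon)
    delta.requires_grad_(True)

    for _ in range(pgd_steps):
        perturbed_log_probs = F.log_softmax(policy_net(states + delta), dim=-1)
        # KL(clean || perturbed); ascend on it with respect to the perturbation.
        kl = F.kl_div(perturbed_log_probs, clean_log_probs,
                      log_target=True, reduction="batchmean")
        (grad,) = torch.autograd.grad(kl, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()   # signed gradient ascent step
            delta.clamp_(-epsilon, epsilon)    # project back into the ball

    # Final regularization term, differentiable w.r.t. the policy parameters.
    perturbed_log_probs = F.log_softmax(policy_net(states + delta.detach()), dim=-1)
    return F.kl_div(perturbed_log_probs, clean_log_probs,
                    log_target=True, reduction="batchmean")
```

In training, such a term would be added with a tunable coefficient to the ordinary PPO or DQN loss; for a deterministic continuous-control policy such as DDPG, an analogous penalty on the change of the output action (e.g., an l2 distance) plays the same role. The paper also considers more sophisticated ways of handling the inner maximization; the PGD loop above is only one simple approximation.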
Author Information
Huan Zhang (UCLA)
Hongge Chen (MIT)
Chaowei Xiao (University of Michigan, Ann Arbor)
I am Chaowei Xiao, a third-year PhD student in the CSE Department at the University of Michigan, Ann Arbor. My advisor is Professor Mingyan Liu. I obtained my bachelor's degree from the School of Software at Tsinghua University in 2015, advised by Professor Yunhao Liu, Professor Zheng Yang and Dr. Lei Yang. I was also a visiting student at UC Berkeley in 2018, advised by Professor Dawn Song and Professor Bo Li. My research interests include adversarial machine learning.
Bo Li (UIUC)
Mingyan Liu (University of Michigan, Ann Arbor)
Mingyan Liu (M'00, SM'11, F'14) received her Ph.D. degree in electrical engineering from the University of Maryland, College Park, in 2000. She is currently a professor with the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, and the Peter and Evelyn Fuss Chair of Electrical and Computer Engineering. Her research interests are in optimal resource allocation, performance modeling, sequential decision and learning theory, game theory and incentive mechanisms, with applications to large-scale networked systems, cybersecurity and cyber risk quantification. She has served on the editorial boards of IEEE/ACM Trans. Networking, IEEE Trans. Mobile Computing, and ACM Trans. Sensor Networks. She is a Fellow of the IEEE and a member of the ACM.
Duane Boning (Massachusetts Institute of Technology)
Cho-Jui Hsieh (UCLA)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
  Wed. Dec 9th, 04:10 -- 04:20 PM, Room: Orals & Spotlights: Social/Adversarial Learning
More from the Same Authors
- 2021 : Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models »
  Boxin Wang · Chejian Xu · Shuohang Wang · Zhe Gan · Yu Cheng · Jianfeng Gao · Ahmed Awadallah · Bo Li
- 2021 : Certified Robustness for Free in Differentially Private Federated Learning »
  Chulin Xie · Yunhui Long · Pin-Yu Chen · Krishnaram Kenthapadi · Bo Li
- 2021 : RVFR: Robust Vertical Federated Learning via Feature Subspace Recovery »
  Jing Liu · Chulin Xie · Krishnaram Kenthapadi · Sanmi Koyejo · Bo Li
- 2021 : What Would Jiminy Cricket Do? Towards Agents That Behave Morally »
  Dan Hendrycks · Mantas Mazeika · Andy Zou · Sahil Patel · Christine Zhu · Jesus Navarro · Dawn Song · Bo Li · Jacob Steinhardt
- 2021 : Career and Life: Panel Discussion - Bo Li, Adriana Romero-Soriano, Devi Parikh, and Emily Denton »
  Emily Denton · Devi Parikh · Bo Li · Adriana Romero
- 2021 : Live Q&A with Bo Li »
  Bo Li
- 2021 : Invited talk – Trustworthy Machine Learning via Logic Inference, Bo Li »
  Bo Li
- 2021 Poster: Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification »
  Shiqi Wang · Huan Zhang · Kaidi Xu · Xue Lin · Suman Jana · Cho-Jui Hsieh · J. Zico Kolter
- 2021 Poster: Learnable Fourier Features for Multi-dimensional Spatial Positional Encoding »
  Yang Li · Si Si · Gang Li · Cho-Jui Hsieh · Samy Bengio
- 2021 Poster: Adjusting for Autocorrelated Errors in Neural Networks for Time Series »
  Fan-Keng Sun · Chris Lang · Duane Boning
- 2021 Poster: G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators »
  Yunhui Long · Boxin Wang · Zhuolin Yang · Bhavya Kailkhura · Aston Zhang · Carl Gunter · Bo Li
- 2021 Poster: Label Disentanglement in Partition-based Extreme Multilabel Classification »
  Xuanqing Liu · Wei-Cheng Chang · Hsiang-Fu Yu · Cho-Jui Hsieh · Inderjit Dhillon
- 2021 Poster: Anti-Backdoor Learning: Training Clean Models on Poisoned Data »
  Yige Li · Xixiang Lyu · Nodens Koren · Lingjuan Lyu · Bo Li · Xingjun Ma
- 2021 Poster: DRONE: Data-aware Low-rank Compression for Large NLP Models »
  Patrick Chen · Hsiang-Fu Yu · Inderjit Dhillon · Cho-Jui Hsieh
- 2021 Poster: Adversarial Attack Generation Empowered by Min-Max Optimization »
  Jingkang Wang · Tianyun Zhang · Sijia Liu · Pin-Yu Chen · Jiacen Xu · Makan Fardad · Bo Li
- 2021 : Reconnaissance Blind Chess + Q&A »
  Ryan Gardner · Gino Perrotta · Corey Lowman · Casey Richardson · Andrew Newman · Jared Markowitz · Nathan Drenkow · Bart Paulhamus · Ashley J Llorens · Todd Neller · Raman Arora · Bo Li · Mykel J Kochenderfer
- 2021 Poster: DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification »
  Yongming Rao · Wenliang Zhao · Benlin Liu · Jiwen Lu · Jie Zhou · Cho-Jui Hsieh
- 2021 Poster: AugMax: Adversarial Composition of Random Augmentations for Robust Training »
  Haotao Wang · Chaowei Xiao · Jean Kossaifi · Zhiding Yu · Anima Anandkumar · Zhangyang Wang
- 2021 Poster: Long-Short Transformer: Efficient Transformers for Language and Vision »
  Chen Zhu · Wei Ping · Chaowei Xiao · Mohammad Shoeybi · Tom Goldstein · Anima Anandkumar · Bryan Catanzaro
- 2021 Poster: Fast Certified Robust Training with Short Warmup »
  Zhouxing Shi · Yihan Wang · Huan Zhang · Jinfeng Yi · Cho-Jui Hsieh
- 2021 Poster: Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions »
  Jiachen Sun · Yulong Cao · Christopher B Choy · Zhiding Yu · Anima Anandkumar · Zhuoqing Morley Mao · Chaowei Xiao
- 2021 Poster: TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness »
  Zhuolin Yang · Linyi Li · Xiaojun Xu · Shiliang Zuo · Qian Chen · Pan Zhou · Benjamin Rubinstein · Ce Zhang · Bo Li
- 2020 Workshop: Workshop on Dataset Curation and Security »
  Nathalie Baracaldo Angel · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild
- 2020 Poster: Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond »
  Kaidi Xu · Zhouxing Shi · Huan Zhang · Yihan Wang · Kai-Wei Chang · Minlie Huang · Bhavya Kailkhura · Xue Lin · Cho-Jui Hsieh
- 2020 Poster: Provably Robust Metric Learning »
  Lu Wang · Xuanqing Liu · Jinfeng Yi · Yuan Jiang · Cho-Jui Hsieh
- 2020 Poster: Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Class-Imbalanced Data »
  Utkarsh Ojha · Krishna Kumar Singh · Cho-Jui Hsieh · Yong Jae Lee
- 2020 Poster: How do fair decisions fare in long-term qualification? »
  Xueru Zhang · Ruibo Tu · Yang Liu · Mingyan Liu · Hedvig Kjellstrom · Kun Zhang · Cheng Zhang
- 2020 Poster: An Efficient Adversarial Attack for Tree Ensembles »
  Chong Zhang · Huan Zhang · Cho-Jui Hsieh
- 2020 Poster: Multi-Stage Influence Function »
  Hongge Chen · Si Si · Yang Li · Ciprian Chelba · Sanjiv Kumar · Duane Boning · Cho-Jui Hsieh
- 2020 Poster: On Convergence of Nearest Neighbor Classifiers over Feature Transformations »
  Luka Rimanic · Cedric Renggli · Bo Li · Ce Zhang
- 2019 Poster: Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers »
  Liwei Wu · Shuqing Li · Cho-Jui Hsieh · James Sharpnack
- 2019 Poster: A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks »
  Hadi Salman · Greg Yang · Huan Zhang · Cho-Jui Hsieh · Pengchuan Zhang
- 2019 Poster: Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness »
  Xueru Zhang · Mohammad Mahdi Khalili · Cem Tekin · Mingyan Liu
- 2019 Poster: Robustness Verification of Tree-based Models »
  Hongge Chen · Huan Zhang · Si Si · Yang Li · Duane Boning · Cho-Jui Hsieh
- 2019 Poster: Convergence of Adversarial Training in Overparametrized Neural Networks »
  Ruiqi Gao · Tianle Cai · Haochuan Li · Cho-Jui Hsieh · Liwei Wang · Jason Lee
- 2019 Spotlight: Convergence of Adversarial Training in Overparametrized Neural Networks »
  Ruiqi Gao · Tianle Cai · Haochuan Li · Cho-Jui Hsieh · Liwei Wang · Jason Lee
- 2019 Poster: A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning »
  Xuanqing Liu · Si Si · Jerry Zhu · Yang Li · Cho-Jui Hsieh