Trajectory prediction is essential for autonomous vehicles (AVs) to plan correct and safe driving behaviors. While many prior works aim to achieve higher prediction accuracy, few study the adversarial robustness of their methods. To bridge this gap, we propose to study the adversarial robustness of data-driven trajectory prediction systems. We devise an optimization-based adversarial attack framework that leverages a carefully designed differentiable dynamic model to generate realistic adversarial trajectories. Empirically, we benchmark the adversarial robustness of state-of-the-art prediction models and show that our attack increases the prediction error on both general metrics and planning-aware metrics by more than 50% and 37%, respectively. We also show that our attack can lead an AV to drive off-road or collide with other vehicles in simulation. Finally, we demonstrate how to mitigate the adversarial attacks using an adversarial training scheme.
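To make the abstract's attack idea concrete, here is a minimal toy sketch (not the paper's implementation): an optimization-based attack that perturbs an agent's observed history so that a predictor's error on the true future is maximized. The `predict` function is a constant-velocity stand-in for a learned trajectory predictor, the gradient is taken by finite differences rather than backpropagation, and the per-waypoint clipping is a crude stand-in for the paper's differentiable dynamic model that keeps adversarial trajectories realistic; all of these names and parameters are illustrative assumptions.

```python
import numpy as np

def predict(history, horizon=5):
    """Constant-velocity stand-in for a learned trajectory predictor.

    history: (T, 2) array of observed positions; returns (horizon, 2)."""
    v = history[-1] - history[-2]                     # last observed velocity
    return history[-1] + v * np.arange(1, horizon + 1)[:, None]

def attack(history, future, steps=50, lr=0.05, max_delta=0.2):
    """Gradient-ascent attack on the prediction error via finite differences.

    Each waypoint is projected to move at most `max_delta` from the
    original history -- a crude proxy for a dynamics-feasibility constraint."""
    adv = history.copy()
    eps = 1e-4
    for _ in range(steps):
        base = np.linalg.norm(predict(adv) - future)
        grad = np.zeros_like(adv)
        for idx in np.ndindex(adv.shape):             # numerical gradient
            pert = adv.copy()
            pert[idx] += eps
            grad[idx] = (np.linalg.norm(predict(pert) - future) - base) / eps
        adv = adv + lr * grad                         # ascend the error
        adv = np.clip(adv, history - max_delta, history + max_delta)
    return adv

# A straight-line scenario: the clean history is predicted perfectly.
history = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
future = np.array([[4.0, 0.0], [5.0, 0.0], [6.0, 0.0], [7.0, 0.0], [8.0, 0.0]])

err_clean = np.linalg.norm(predict(history) - future)
adv_history = attack(history, future)
err_adv = np.linalg.norm(predict(adv_history) - future)
```

In the paper's setting, the hand-written finite-difference loop would be replaced by autodiff through the predictor, and the clipping by projection through the differentiable dynamic model, but the structure (maximize prediction error subject to a realism constraint) is the same.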
Author Information
Yulong Cao (University of Michigan)
Chaowei Xiao (ASU/NVIDIA)
I am Chaowei Xiao, a third-year PhD student in the CSE Department at the University of Michigan, Ann Arbor. My advisor is Professor Mingyan Liu. I obtained my bachelor's degree from the School of Software at Tsinghua University in 2015, advised by Professor Yunhao Liu, Professor Zheng Yang, and Dr. Lei Yang. I was also a visiting student at UC Berkeley in 2018, advised by Professor Dawn Song and Professor Bo Li. My research interests include adversarial machine learning.
Anima Anandkumar (NVIDIA/Caltech)
Danfei Xu (Georgia Tech)
Marco Pavone (Stanford University)
More from the Same Authors
-
2021 : What Matters in Learning from Offline Human Demonstrations for Robot Manipulation »
Ajay Mandlekar · Danfei Xu · Josiah Wong · Chen Wang · Li Fei-Fei · Silvio Savarese · Yuke Zhu · Roberto Martín-Martín -
2022 : Retrieval-based Controllable Molecule Generation »
Jack Wang · Weili Nie · Zhuoran Qiao · Chaowei Xiao · Richard Baraniuk · Anima Anandkumar -
2022 : MoleculeCLIP: Learning Transferable Molecule Multi-Modality Models via Natural Language »
Shengchao Liu · Weili Nie · Chengpeng Wang · Jiarui Lu · Zhuoran Qiao · Ling Liu · Jian Tang · Anima Anandkumar · Chaowei Xiao -
2022 : Calibration of Large Neural Weather Models »
Andre Graubner · Kamyar Azizzadenesheli · Jaideep Pathak · Morteza Mardani · Mike Pritchard · Karthik Kashinath · Anima Anandkumar -
2022 : FourCastNet: A practical introduction to a state-of-the-art deep learning global weather emulator »
Jaideep Pathak · Shashank Subramanian · Peter Harrington · Thorsten Kurth · Andre Graubner · Morteza Mardani · David Hall · Karthik Kashinath · Anima Anandkumar -
2022 : Foundation Models for Semantic Novelty in Reinforcement Learning »
Tarun Gupta · Peter Karkus · Tong Che · Danfei Xu · Marco Pavone -
2022 : DiffStack: A Differentiable and Modular Control Stack for Autonomous Vehicles »
Peter Karkus · Boris Ivanovic · Shie Mannor · Marco Pavone -
2022 : Robust Trajectory Prediction against Adversarial Attacks »
Yulong Cao · Danfei Xu · Xinshuo Weng · Zhuoqing Morley Mao · Anima Anandkumar · Chaowei Xiao · Marco Pavone -
2022 : Conformal Semantic Keypoint Detection with Statistical Guarantees »
Heng Yang · Marco Pavone -
2022 : Expanding the Deployment Envelope of Behavior Prediction via Adaptive Meta-Learning »
Boris Ivanovic · James Harrison · Marco Pavone -
2022 : Invited Talk: Marco Pavone »
Marco Pavone -
2022 Workshop: Trustworthy and Socially Responsible Machine Learning »
Huan Zhang · Linyi Li · Chaowei Xiao · J. Zico Kolter · Anima Anandkumar · Bo Li -
2022 Poster: Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models »
Manli Shu · Weili Nie · De-An Huang · Zhiding Yu · Tom Goldstein · Anima Anandkumar · Chaowei Xiao -
2022 Poster: Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models »
Boxin Wang · Wei Ping · Chaowei Xiao · Peng Xu · Mostofa Patwary · Mohammad Shoeybi · Bo Li · Anima Anandkumar · Bryan Catanzaro -
2021 Poster: Data Sharing and Compression for Cooperative Networked Control »
Jiangnan Cheng · Marco Pavone · Sachin Katti · Sandeep Chinchali · Ao Tang -
2021 Poster: Controllable and Compositional Generation with Latent-Space Energy-Based Models »
Weili Nie · Arash Vahdat · Anima Anandkumar -
2021 Poster: AugMax: Adversarial Composition of Random Augmentations for Robust Training »
Haotao Wang · Chaowei Xiao · Jean Kossaifi · Zhiding Yu · Anima Anandkumar · Zhangyang Wang -
2021 Poster: Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds »
Yujia Huang · Huan Zhang · Yuanyuan Shi · J. Zico Kolter · Anima Anandkumar -
2021 Poster: Coupled Segmentation and Edge Learning via Dynamic Graph Propagation »
Zhiding Yu · Rui Huang · Wonmin Byeon · Sifei Liu · Guilin Liu · Thomas Breuel · Anima Anandkumar · Jan Kautz -
2021 Poster: Long-Short Transformer: Efficient Transformers for Language and Vision »
Chen Zhu · Wei Ping · Chaowei Xiao · Mohammad Shoeybi · Tom Goldstein · Anima Anandkumar · Bryan Catanzaro -
2021 Poster: Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions »
Jiachen Sun · Yulong Cao · Christopher B Choy · Zhiding Yu · Anima Anandkumar · Zhuoqing Morley Mao · Chaowei Xiao -
2021 Poster: SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers »
Enze Xie · Wenhai Wang · Zhiding Yu · Anima Anandkumar · Jose M. Alvarez · Ping Luo -
2020 Poster: Continuous Meta-Learning without Tasks »
James Harrison · Apoorva Sharma · Chelsea Finn · Marco Pavone -
2020 Poster: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh -
2020 Spotlight: Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations »
Huan Zhang · Hongge Chen · Chaowei Xiao · Bo Li · Mingyan Liu · Duane Boning · Cho-Jui Hsieh -
2020 Poster: Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders »
Masha Itkina · Boris Ivanovic · Ransalu Senanayake · Mykel J Kochenderfer · Marco Pavone -
2019 : Marco Pavone: On Safe and Efficient Human-robot Interactions via Multi-modal Intent Modeling and Reachability-based Safety Assurance »
Marco Pavone -
2019 Poster: Regression Planning Networks »
Danfei Xu · Roberto Martín-Martín · De-An Huang · Yuke Zhu · Silvio Savarese · Li Fei-Fei -
2019 Poster: High-Dimensional Optimization in Adaptive Random Subspaces »
Jonathan Lacotte · Mert Pilanci · Marco Pavone -
2018 : Panel »
Yimeng Zhang · Alfredo Canziani · Marco Pavone · Dorsa Sadigh · Kurt Keutzer -
2018 : Invited Talk: Marco Pavone, Stanford »
Marco Pavone -
2015 Poster: Risk-Sensitive and Robust Decision-Making: a CVaR Optimization Approach »
Yinlam Chow · Aviv Tamar · Shie Mannor · Marco Pavone