Workshop | Policy Resilience to Environment Poisoning Attack on Reinforcement Learning | Hang Xu · Zinovi Rabinovich
Workshop | Few-Shot Transferable Robust Representation Learning via Bilevel Attacks | Minseon Kim · Hyeonjeong Ha · Sung Ju Hwang
Workshop | Certifiable Robustness Against Patch Attacks Using an ERM Oracle | Kevin Stangl · Avrim Blum · Omar Montasser · Saba Ahmadi
Workshop | c-MBA: Adversarial Attack for Cooperative MARL Using Learned Dynamics Model | Nhan H Pham · Lam Nguyen · Jie Chen · Thanh Lam Hoang · Subhro Das · Lily Weng
Workshop | Plausible Adversarial Attacks on Direct Parameter Inference Models in Astrophysics | Benjamin Horowitz · Peter Melchior
Workshop | Sat 6:30 | Spotlight: Imperceptible Adversarial Attacks on Discrete-Time Dynamic Graph Models | Kartik Sharma · Rakshit Trivedi · Rohit Sridhar · Srijan Kumar
Workshop | Sat 7:30 | Spotlight 1 - Elre Talea Oldewage: Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners | Elre Oldewage
Workshop | Fri 7:45 | Contributed Talk: Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning | Xiangyu Liu · Souradip Chakraborty · Furong Huang
Workshop | Few-shot Backdoor Attacks via Neural Tangent Kernels | Jonathan Hayase · Sewoong Oh
Workshop | Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks | Jimmy Di · Jack Douglas · Jayadev Acharya · Gautam Kamath · Ayush Sekhari
Poster | Thu 9:00 | [Re] Exacerbating Algorithmic Bias through Fairness Attacks | Matteo Tafuro · Andrea Lombardo · Tin Hadži Veljković · Lasse Becker-Czarnetzki
Workshop | Model and Method: Training-Time Attack for Cooperative Multi-Agent Reinforcement Learning | Siyang Wu · Tonghan Wang · Xiaoran Wu · Jingfeng Zhang · Yujing Hu · Changjie Fan · Chongjie Zhang