

Spotlight in Workshop: Multi-Agent Security: Security as Key to AI Safety

Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies

Xiangyu Liu · Chenghao Deng · Yanchao Sun · Yongyuan Liang · Furong Huang

Keywords: [ Robust reinforcement learning; beyond worst-case attack ]

[ Project Page ]
Sat 16 Dec 1:10 p.m. PST — 1:20 p.m. PST
 
Presentation: Multi-Agent Security: Security as Key to AI Safety
Sat 16 Dec 7 a.m. PST — 3:30 p.m. PST

Abstract: Considerable effort has been directed toward ensuring that reinforcement learning (RL) policies are robust to adversarial attacks at test time. While current approaches are effective against strong attacks in potential worst-case scenarios, they often compromise performance in the absence of attacks or under only weak attacks. To address this, we study policy robustness under the well-accepted state-adversarial attack model, extending our focus beyond merely worst-case attacks. We refine the baseline policy class $\Pi$ prior to test time, aiming for efficient adaptation within a compact, finite policy class $\tilde{\Pi}$, over which an adversarial bandit subroutine can be run. We then propose a novel training-time algorithm that iteratively discovers non-dominated policies, forming a near-optimal and minimal $\tilde{\Pi}$. Empirical validation on MuJoCo corroborates the superiority of our approach in terms of both natural and robust performance, as well as its adaptability to various attack scenarios.
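The abstract names two concrete ingredients, so a minimal sketch may help fix ideas; nothing below comes from the paper itself. It assumes Pareto dominance over per-attack returns as one plausible reading of "non-dominated policies", and EXP3 as one standard choice of adversarial bandit subroutine for test-time adaptation within the finite class $\tilde{\Pi}$. All identifiers here (non_dominated, Exp3Selector, run_episode) are hypothetical, not the authors' implementation.

```python
import numpy as np

def non_dominated(perf):
    """Indices of non-dominated rows of a (policies x attacks) return
    matrix: drop policy i if some other policy j is at least as good
    under every attack and strictly better under at least one."""
    n = perf.shape[0]
    return [
        i for i in range(n)
        if not any(
            np.all(perf[j] >= perf[i]) and np.any(perf[j] > perf[i])
            for j in range(n) if j != i
        )
    ]

class Exp3Selector:
    """EXP3 adversarial bandit over a finite policy class: one 'pull'
    runs an episode with the chosen policy; the reward is the episode
    return, assumed rescaled to [0, 1]."""
    def __init__(self, n_policies, gamma=0.1, seed=None):
        self.k = n_policies
        self.gamma = gamma                      # exploration rate
        self.w = np.ones(n_policies)            # exponential weights
        self.rng = np.random.default_rng(seed)

    def choose(self):
        # Mix the weight distribution with uniform exploration.
        self.p = (1 - self.gamma) * self.w / self.w.sum() + self.gamma / self.k
        self.arm = int(self.rng.choice(self.k, p=self.p))
        return self.arm

    def update(self, reward):
        # Importance-weighted reward estimate for the pulled arm only.
        x_hat = reward / self.p[self.arm]
        self.w[self.arm] *= np.exp(self.gamma * x_hat / self.k)
```

A hypothetical end-to-end use of the two pieces, with returns, policies, run_episode, and normalize standing in for whatever the training pipeline provides:

```python
# returns[i][a] = average return of candidate policy i under attack a,
# measured at training time; survivors form the compact class that the
# bandit then adapts over at test time.
keep = non_dominated(returns)
selector = Exp3Selector(len(keep))
for episode in range(num_episodes):
    i = selector.choose()
    ret = run_episode(policies[keep[i]])   # hypothetical rollout helper
    selector.update(normalize(ret))        # map return into [0, 1]
```

EXP3 makes no stochastic assumption on the reward sequence, which is why an adversarial (rather than stochastic) bandit is a natural fit when the attacker may change, or adapt, across episodes.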
