We consider the safe reinforcement learning (RL) problem of maximizing utility with extremely low constraint violation rates. Assuming no prior knowledge of, or pre-training on, the environment's safety model for a given task, an agent has to learn, via exploration, which states and actions are safe. A popular approach in this line of research is to combine a model-free RL algorithm with the Lagrangian method to dynamically adjust the weight of the constraint reward relative to the utility reward. It relies on a single policy to handle the conflict between utility and constraint rewards, which is often challenging. We present SEditor, a two-policy approach that learns a safety editor policy transforming potentially unsafe actions proposed by a utility maximizer policy into safe ones. The safety editor is trained to maximize the constraint reward while minimizing a hinge loss of the utility state-action values before and after an action is edited. SEditor extends existing safety layer designs, which assume simplified safety models, to general safe RL scenarios where the safety model can in theory be arbitrarily complex. As a first-order method, it is easy to implement and efficient for both inference and training. On 12 Safety Gym tasks and 2 safe racing tasks, SEditor obtains a much higher overall safety-weighted-utility (SWU) score than the baselines, and demonstrates outstanding utility performance with constraint violation rates as low as once per 2k time steps, even in obstacle-dense environments. On some tasks, this low violation rate is up to 200 times lower than that of an unconstrained RL method with similar utility performance. Code is available at https://github.com/hnyu/seditor.
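As a rough illustration of the editor objective described in the abstract, here is a minimal PyTorch sketch. The networks `utility_q` (utility critic), `safety_q` (constraint critic), and `editor` (the safety editor policy), along with all argument names, are hypothetical assumptions for illustration, not the released implementation at the repository above.

```python
import torch
import torch.nn.functional as F

def editor_loss(utility_q, safety_q, editor, obs, raw_action, hinge_weight=1.0):
    """Sketch of the safety editor objective: maximize the constraint
    (safety) value of the edited action while penalizing, via a hinge loss,
    any drop in the utility Q-value caused by editing the proposed action."""
    # The editor transforms the potentially unsafe action proposed by the
    # utility maximizer policy into a (hopefully) safe one.
    edited_action = editor(obs, raw_action)

    # Constraint-reward critic evaluated at the edited action:
    # maximizing it is expressed as minimizing its negation.
    safety_term = -safety_q(obs, edited_action).mean()

    # Hinge loss on utility Q-values before vs. after editing: the editor
    # is only penalized when editing lowers the utility value.
    q_before = utility_q(obs, raw_action)
    q_after = utility_q(obs, edited_action)
    utility_hinge = F.relu(q_before - q_after).mean()

    return safety_term + hinge_weight * utility_hinge
```

In this reading, edits that preserve or improve the utility Q-value incur no utility penalty, so the editor is free to intervene only as much as safety requires.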
Author Information
Haonan Yu (Horizon Robotics)
Wei Xu (Horizon Robotics)
Haichao Zhang (Horizon Robotics)
More from the Same Authors
- 2022 Poster: Society of Agents: Regret Bounds of Concurrent Thompson Sampling
  Yan Chen · Perry Dong · Qinxun Bai · Maria Dimakopoulou · Wei Xu · Zhengyuan Zhou
- 2022 Poster: PaCo: Parameter-Compositional Multi-task Reinforcement Learning
  Lingfeng Sun · Haichao Zhang · Wei Xu · Masayoshi TOMIZUKA
- 2021 Poster: TAAC: Temporally Abstract Actor-Critic for Continuous Control
  Haonan Yu · Wei Xu · Haichao Zhang
- 2019 Poster: Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
  Haichao Zhang · Jianyu Wang
- 2018 Poster: Adversarial Text Generation via Feature-Mover's Distance
  Liqun Chen · Shuyang Dai · Chenyang Tao · Haichao Zhang · Zhe Gan · Dinghan Shen · Yizhe Zhang · Guoyin Wang · Lawrence Carin
- 2017: Break + Poster (1)
  Devendra Singh Chaplot · CHIH-YAO MA · Simon Brodeur · Eri Matsuo · Ichiro Kobayashi · Seitaro Shinagawa · Koichiro Yoshino · Yuhong Guo · Ben Murdoch · Kanthashree Mysore Sathyendra · Daniel Ricks · Haichao Zhang · Joshua Peterson · Li Zhang · Mircea Mironenco · Peter Anderson · Mark Johnson · Kang Min Yoo · Guntis Barzdins · Ahmed H Zaidi · Martin Andrews · Sam Witteveen · SUBBAREDDY OOTA · Prashanth Vijayaraghavan · Ke Wang · Yan Zhu · Renars Liepins · Max Quinn · Amit Raj · Vincent Cartillier · Eric Chu · Ethan Caballero · Fritz Obermeyer
- 2014 Poster: Scale Adaptive Blind Deblurring
  Haichao Zhang · Jianchao Yang
- 2013 Poster: Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty
  Haichao Zhang · David Wipf
- 2013 Oral: Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty
  Haichao Zhang · David Wipf