

Poster in Workshop: Progress and Challenges in Building Trustworthy Embodied AI

Characterising the Robustness of Reinforcement Learning for Continuous Control using Disturbance Injection

Catherine Glossop · Jacopo Panerati · Amrit Krishnan · Zhaocong Yuan · Angela Schoellig

Keywords: [ Robotics ] [ robust control ] [ safety ] [ Reinforcement Learning ] [ Benchmarks ] [ continuous control ]


Abstract:

In this study, we leverage the deliberate and systematic fault-injection capabilities of an open-source benchmark suite to perform a series of experiments on state-of-the-art deep and robust reinforcement learning algorithms. We aim to benchmark robustness in the context of continuous action spaces, which are crucial for deployment in robot control. We find that robustness is more prominent for action disturbances than it is for disturbances to observations and dynamics. We also observe that state-of-the-art approaches that are not explicitly designed to improve robustness perform at a level comparable to that achieved by those that are. Our study and results are intended to provide insight into the current state of safe and robust reinforcement learning and a foundation for the advancement of the field, in particular, for deployment in robotic systems. NOTE: We plan to submit a subset of our results in a shorter 4-page version of this paper to the "NeurIPS 2022 Workshop on Distribution Shifts (DistShift)". DistShift does NOT have proceedings and will be held on a different date (Dec. 3) than TEA.
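To make the idea of disturbance injection concrete, the sketch below shows one way to add noise to actions and observations of a Gym-style continuous-control environment. This is a minimal illustration only, not the API of the benchmark suite used in the paper; the DisturbanceWrapper class, its parameters, the Gaussian noise model, and the use of gymnasium and Pendulum-v1 are assumptions made for demonstration.

# Minimal sketch of disturbance injection for a Gym-style continuous-control
# environment. Illustration only; NOT the benchmark suite's actual API.
import numpy as np
import gymnasium as gym


class DisturbanceWrapper(gym.Wrapper):
    """Injects additive Gaussian disturbances into actions and observations."""

    def __init__(self, env, action_std=0.0, obs_std=0.0, seed=None):
        super().__init__(env)
        self.action_std = action_std
        self.obs_std = obs_std
        self.rng = np.random.default_rng(seed)

    def step(self, action):
        # Disturb the action before it reaches the underlying dynamics.
        noisy_action = np.asarray(action, dtype=np.float64)
        if self.action_std > 0:
            noisy_action = noisy_action + self.rng.normal(
                0.0, self.action_std, size=noisy_action.shape
            )
            noisy_action = np.clip(
                noisy_action, self.action_space.low, self.action_space.high
            )
        obs, reward, terminated, truncated, info = self.env.step(noisy_action)
        # Disturb the observation returned to the policy.
        if self.obs_std > 0:
            obs = obs + self.rng.normal(0.0, self.obs_std, size=np.shape(obs))
        return obs, reward, terminated, truncated, info


# Example usage: evaluate a (random) policy under action and observation noise.
if __name__ == "__main__":
    env = DisturbanceWrapper(gym.make("Pendulum-v1"), action_std=0.1, obs_std=0.05)
    obs, _ = env.reset(seed=0)
    total_reward = 0.0
    for _ in range(200):
        action = env.action_space.sample()  # stand-in for a trained policy
        obs, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    print(f"Episode return under disturbances: {total_reward:.2f}")

In this toy setup, sweeping action_std and obs_std over a range of values and recording the resulting returns would yield a simple robustness profile for a trained policy, which is the general kind of evaluation the abstract describes.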
