Poster
Safe Model-based Reinforcement Learning with Stability Guarantees
Felix Berkenkamp · Matteo Turchetta · Angela Schoellig · Andreas Krause
Pacific Ballroom #203
Keywords: [ Gaussian Processes ] [ Decision and Control ] [ Reinforcement Learning ] [ Model-Based RL ]
Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied to safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.
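The central mechanism described in the abstract is checking a Lyapunov decrease condition against the confidence intervals of a Gaussian process model of the dynamics: states where the model can guarantee, with high probability, that the Lyapunov function decreases are certified as safe. The sketch below is a minimal, hypothetical illustration of that idea on a toy one-dimensional system, not the authors' implementation; the dynamics `true_dynamics`, the quadratic Lyapunov candidate, the helper `certified_safe`, and the confidence scaling `beta` are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): certify states by checking a
# Lyapunov decrease condition against GP confidence intervals of the dynamics.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D closed-loop dynamics x_{t+1} = f(x_t); unknown to the learner.
def true_dynamics(x):
    return 0.8 * x + 0.1 * np.sin(x)

# Fit a GP to a few observed transitions (the statistical model of the dynamics).
X_train = np.linspace(-1.0, 1.0, 15).reshape(-1, 1)
y_train = true_dynamics(X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp.fit(X_train, y_train)

# Quadratic Lyapunov candidate V(x) = x^2 (an assumption for this toy example).
V = lambda x: x ** 2

def certified_safe(x_grid, beta=2.0):
    """Return True for states where V decreases along the modeled dynamics
    with high probability, using mean +/- beta * std as the confidence interval."""
    mean, std = gp.predict(x_grid.reshape(-1, 1), return_std=True)
    # Worst-case value of V at the next state over the confidence interval.
    v_next_worst = np.maximum(V(mean - beta * std), V(mean + beta * std))
    return v_next_worst < V(x_grid)

# States the model can certify as stable; in the paper, a sub-level set of V
# built from such certificates defines the safe region used for exploration.
x_grid = np.linspace(-1.5, 1.5, 11)
print(dict(zip(np.round(x_grid, 2), certified_safe(x_grid))))
```

As more transition data is collected inside the certified region, the GP's uncertainty shrinks, the decrease condition can be verified for more states, and the safe region grows, which is the safe-exploration loop the abstract describes.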