

Poster

Inverse Reinforcement Learning in a Continuous State Space with Formal Guarantees

Gregory Dexter · Kevin Bello · Jean Honorio

Keywords: [ Reinforcement Learning and Planning ] [ Theory ]


Abstract:

Inverse Reinforcement Learning (IRL) is the problem of finding a reward function that describes observed or known expert behavior. The IRL setting is remarkably useful for automated control in situations where the reward function is difficult to specify manually, or as a means to extract agent preferences. In this work, we provide a new IRL algorithm for the continuous state space setting with unknown transition dynamics by modeling the system using a basis of orthonormal functions. Moreover, we provide a proof of correctness and formal guarantees on the sample and time complexity of our algorithm. Finally, we present synthetic experiments to corroborate our theoretical guarantees.
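The abstract gives no implementation details, but as an illustration of the general basis-expansion idea it mentions, the following is a minimal sketch (not the authors' algorithm) of representing a function over a continuous one-dimensional state space by its coefficients in an orthonormal basis. The cosine basis, the target "reward," and all names here are arbitrary choices for illustration only.

```python
# Illustrative sketch only: expand a function on a continuous state space [0, 1]
# in an orthonormal cosine basis and reconstruct it from the coefficients.
# This is NOT the paper's algorithm; basis and target function are assumptions.
import numpy as np

def cosine_basis(x, k):
    """Orthonormal cosine basis on [0, 1]: phi_0(x) = 1, phi_k(x) = sqrt(2) cos(pi k x)."""
    return np.ones_like(x) if k == 0 else np.sqrt(2.0) * np.cos(np.pi * k * x)

def project(f_vals, x, num_basis):
    """Compute basis coefficients c_k = <f, phi_k> by numerical integration."""
    return np.array([np.trapz(f_vals * cosine_basis(x, k), x) for k in range(num_basis)])

def reconstruct(coeffs, x):
    """Evaluate the truncated expansion sum_k c_k phi_k(x)."""
    return sum(c * cosine_basis(x, k) for k, c in enumerate(coeffs))

x = np.linspace(0.0, 1.0, 2001)
reward = np.exp(-20.0 * (x - 0.3) ** 2)   # a smooth stand-in "reward" function
coeffs = project(reward, x, num_basis=10)
approx = reconstruct(coeffs, x)
print("max approximation error:", np.max(np.abs(reward - approx)))
```

In this kind of representation, learning a reward (or modeling dynamics) reduces to estimating a finite vector of basis coefficients, which is what makes finite-sample and computational guarantees tractable to state.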
