
Inverse Reinforcement Learning in a Continuous State Space with Formal Guarantees
Gregory Dexter · Kevin Bello · Jean Honorio

Thu Dec 09 04:30 PM -- 06:00 PM (PST)

Inverse Reinforcement Learning (IRL) is the problem of finding a reward function that explains observed expert behavior. The IRL setting is remarkably useful for automated control in situations where the reward function is difficult to specify manually, or as a means to extract agent preferences. In this work, we provide a new IRL algorithm for the continuous state space setting with unknown transition dynamics by modeling the system using a basis of orthonormal functions. Moreover, we provide a proof of correctness and formal guarantees on the sample and time complexity of our algorithm. Finally, we present synthetic experiments that corroborate our theoretical guarantees.
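The abstract's key modeling idea is representing functions over a continuous state space in an orthonormal basis, so that an unknown function is summarized by finitely many coefficients. The sketch below illustrates that general idea only; the specific basis, state space, and estimation procedure used in the paper are not given here, and the normalized Legendre basis is an assumption chosen for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative sketch (not the paper's algorithm): represent a function on
# [-1, 1] by its coefficients in an orthonormal basis -- here, normalized
# Legendre polynomials, which satisfy int_{-1}^{1} P_k^2 dx = 2 / (2k + 1).

def orthonormal_legendre(k, x):
    """Evaluate the k-th normalized Legendre polynomial at points x."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return legendre.legval(x, c) * np.sqrt((2 * k + 1) / 2)

def project(f, num_basis, num_quad=64):
    """Compute coefficients c_k = <f, phi_k> via Gauss-Legendre quadrature."""
    nodes, weights = legendre.leggauss(num_quad)
    fx = f(nodes)
    return np.array([np.sum(weights * fx * orthonormal_legendre(k, nodes))
                     for k in range(num_basis)])

def reconstruct(coeffs, x):
    """Evaluate the truncated expansion sum_k c_k phi_k(x)."""
    return sum(c * orthonormal_legendre(k, x) for k, c in enumerate(coeffs))

# A smooth hypothetical "reward" function is captured by a few coefficients.
reward = lambda x: np.exp(-x ** 2) + 0.5 * x
coeffs = project(reward, num_basis=8)
x = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(reconstruct(coeffs, x) - reward(x)))
```

Because the target function is smooth, the coefficients decay rapidly and a short truncated expansion reconstructs it accurately; this finite parameterization is what makes the continuous-state problem amenable to finite-sample analysis.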

Author Information

Gregory Dexter (Purdue University)
Kevin Bello (University of Chicago & Carnegie Mellon University)
Jean Honorio (Purdue University)
