

Poster

MAP Inference for Bayesian Inverse Reinforcement Learning

Jaedeug Choi · Kee-Eung Kim


Abstract:

The difficulty in inverse reinforcement learning (IRL) lies in choosing the best reward function, since there are typically infinitely many reward functions under which the given behaviour data is optimal. We address this challenge in a Bayesian framework by computing the maximum a posteriori (MAP) estimate of the reward function, and show that most previous IRL algorithms can be cast as instances of this framework. We also present a gradient method for MAP estimation based on the (sub)differentiability of the posterior distribution, and demonstrate the effectiveness of our approach by comparing its performance to that of previous algorithms.
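
Below is a minimal sketch, not the authors' implementation, of gradient-based MAP inference for Bayesian IRL on a small random MDP. It assumes a Boltzmann (softmax) demonstration likelihood over optimal Q-values and a Gaussian prior on the reward, both common choices in Bayesian IRL; the paper's (sub)gradient of the posterior is replaced here by a finite-difference approximation for brevity. All names and hyperparameters (nS, nA, alpha, lr, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, alpha = 5, 3, 0.95, 2.0           # states, actions, discount, Boltzmann temperature
P = rng.dirichlet(np.ones(nS), size=(nS, nA))    # transition model: P[s, a] is a dist over s'
true_r = rng.normal(size=nS)                     # hypothetical ground-truth state reward

def q_values(r, iters=200):
    """Optimal Q-values via value iteration for a state-based reward r."""
    Q = np.zeros((nS, nA))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = r[:, None] + gamma * P @ V
    return Q

def log_posterior(r, demos):
    """log P(D|r) under a Boltzmann policy, plus a standard-normal log-prior on r."""
    Q = alpha * q_values(r)
    m = Q.max(axis=1, keepdims=True)
    logZ = (np.log(np.exp(Q - m).sum(axis=1)) + m[:, 0])   # log-sum-exp per state
    ll = sum(Q[s, a] - logZ[s] for s, a in demos)
    return ll - 0.5 * np.dot(r, r)

# Generate demonstrations from the Boltzmann policy under the true reward.
Qt = alpha * q_values(true_r)
pi = np.exp(Qt - Qt.max(axis=1, keepdims=True))
pi /= pi.sum(axis=1, keepdims=True)
demos = [(s, rng.choice(nA, p=pi[s])) for s in rng.integers(0, nS, size=200)]

# Gradient ascent on the log-posterior (finite-difference stand-in for the
# (sub)gradient derived in the paper).
r, lr, eps = np.zeros(nS), 0.05, 1e-4
for step in range(300):
    g = np.array([(log_posterior(r + eps * np.eye(nS)[i], demos)
                   - log_posterior(r - eps * np.eye(nS)[i], demos)) / (2 * eps)
                  for i in range(nS)])
    r += lr * g

print("MAP reward estimate:", np.round(r, 2))
```

Note the design choice this highlights: unlike posterior-mean BIRL, which requires MCMC sampling over reward functions, the MAP estimate only needs (sub)gradient evaluations of the log-posterior, each of which costs one policy-evaluation-style solve of the MDP.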
