Poster
MAP Inference for Bayesian Inverse Reinforcement Learning
Jaedeug Choi · Kee-Eung Kim
The difficulty in inverse reinforcement learning (IRL) lies in choosing the best reward function, since there are typically infinitely many reward functions under which the given behaviour data is optimal. Using a Bayesian framework, we address this challenge by computing the maximum a posteriori (MAP) estimate of the reward function, and show that most previous IRL algorithms can be cast as instances of our framework. We also present a gradient method for MAP estimation based on the (sub)differentiability of the posterior distribution. We demonstrate the effectiveness of our approach by comparing the performance of the proposed method with that of previous algorithms.
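The abstract describes MAP estimation of the reward via a gradient method on the (sub)differentiable posterior. The sketch below is not the authors' algorithm; it is a minimal illustration of MAP reward recovery in a toy tabular MDP, assuming a Boltzmann-rational demonstration likelihood and a Gaussian prior, and approximating the posterior gradient by finite differences rather than the paper's analytic (sub)gradient. All function names and parameters (q_values, beta, etc.) are illustrative.

```python
import numpy as np

n_states, n_actions, gamma, beta = 5, 2, 0.95, 2.0
rng = np.random.default_rng(0)

# Toy MDP with known random dynamics: P[a, s, s'] = Pr(s' | s, a).
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

def q_values(r, iters=200):
    """Optimal Q(a, s) under state reward r, computed by value iteration."""
    v = np.zeros(n_states)
    for _ in range(iters):
        q = r[None, :] + gamma * (P @ v)   # shape (n_actions, n_states)
        v = q.max(axis=0)
    return q

def boltzmann_log_policy(r):
    """Log of the assumed expert policy: softmax over actions of beta * Q."""
    logits = beta * q_values(r)
    m = logits.max(axis=0, keepdims=True)
    return logits - (m + np.log(np.exp(logits - m).sum(axis=0, keepdims=True)))

def log_posterior(r, demos):
    """Log-likelihood of (s, a) demonstrations plus a standard-normal log-prior."""
    log_pi = boltzmann_log_policy(r)
    return sum(log_pi[a, s] for s, a in demos) - 0.5 * np.sum(r ** 2)

def grad_log_posterior(r, demos, eps=1e-4):
    """Central finite-difference gradient (a stand-in for the paper's (sub)gradient)."""
    g = np.zeros_like(r)
    for i in range(len(r)):
        d = np.zeros_like(r)
        d[i] = eps
        g[i] = (log_posterior(r + d, demos) - log_posterior(r - d, demos)) / (2 * eps)
    return g

# Generate demonstrations from a hand-picked "true" reward.
true_r = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi_true = np.exp(boltzmann_log_policy(true_r))
demos = []
for _ in range(300):
    s = rng.integers(n_states)
    a = rng.choice(n_actions, p=pi_true[:, s])
    demos.append((s, a))

# Gradient ascent on the log posterior (MAP estimation of the reward).
r_hat = np.zeros(n_states)
for _ in range(200):
    r_hat += 0.05 * grad_log_posterior(r_hat, demos)

print("MAP reward estimate:", np.round(r_hat, 2))
```

Finite differences keep the sketch short; the paper's contribution is precisely the analytic treatment of the posterior's (sub)differentiability, which this toy example does not reproduce.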
Author Information
Jaedeug Choi (KAIST)
Kee-Eung Kim (KAIST)
More from the Same Authors
- 2018 Poster: A Bayesian Approach to Generative Adversarial Imitation Learning »
  Wonseok Jeon · Seokin Seo · Kee-Eung Kim
- 2018 Spotlight: A Bayesian Approach to Generative Adversarial Imitation Learning »
  Wonseok Jeon · Seokin Seo · Kee-Eung Kim
- 2018 Poster: Monte-Carlo Tree Search for Constrained POMDPs »
  Jongmin Lee · Geon-Hyeong Kim · Pascal Poupart · Kee-Eung Kim
- 2017 Poster: Generative Local Metric Learning for Kernel Regression »
  Yung-Kyun Noh · Masashi Sugiyama · Kee-Eung Kim · Frank Park · Daniel Lee
- 2012 Poster: Cost-Sensitive Exploration in Bayesian Reinforcement Learning »
  Dongho Kim · Kee-Eung Kim · Pascal Poupart
- 2012 Poster: Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions »
  Jaedeug Choi · Kee-Eung Kim