Poster
Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions
Jaedeug Choi · Kee-Eung Kim
Wed Dec 05 07:00 PM -- 12:00 AM (PST) @ Harrah’s Special Events Center 2nd Floor
We present a nonparametric Bayesian approach to inverse reinforcement learning (IRL) for multiple reward functions. Most previous IRL algorithms assume that the behaviour data is obtained from an agent optimizing a single reward function, but this assumption rarely holds in practice. Our approach integrates the Dirichlet process mixture model into Bayesian IRL. We provide an efficient Metropolis-Hastings sampling algorithm that utilizes the gradient of the posterior to estimate the underlying reward functions, and demonstrate through experiments on a number of problem domains that our approach outperforms previous methods.
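The abstract mentions a Metropolis-Hastings sampler that exploits the gradient of the posterior. As a rough illustration only, and not the paper's actual algorithm, the sketch below shows a generic Langevin-style, gradient-informed Metropolis-Hastings update for a reward-parameter vector; log_post and grad_log_post are hypothetical placeholders for a Bayesian IRL log-posterior and its gradient.

    import numpy as np

    def mala_step(theta, log_post, grad_log_post, step=1e-2, rng=None):
        """One gradient-informed Metropolis-Hastings (Langevin) step.

        theta         -- current reward-parameter vector (illustrative only)
        log_post      -- callable returning the log posterior at theta
        grad_log_post -- callable returning the gradient of the log posterior
        """
        rng = np.random.default_rng() if rng is None else rng

        # Propose by drifting along the gradient of the log posterior
        # and adding Gaussian noise (Langevin proposal).
        noise = rng.normal(size=theta.shape)
        prop = theta + 0.5 * step * grad_log_post(theta) + np.sqrt(step) * noise

        # The proposal density is asymmetric, so include the correction term.
        def log_q(x_to, x_from):
            diff = x_to - x_from - 0.5 * step * grad_log_post(x_from)
            return -np.sum(diff ** 2) / (2.0 * step)

        log_alpha = (log_post(prop) - log_post(theta)
                     + log_q(theta, prop) - log_q(prop, theta))
        if np.log(rng.uniform()) < log_alpha:
            return prop   # accept the proposal
        return theta      # reject and keep the current sample

For example, with a toy standard-normal log posterior (log_post = lambda t: -0.5 * np.sum(t ** 2), grad_log_post = lambda t: -t), repeatedly applying mala_step produces samples from that distribution; in the paper's setting the target would instead be the posterior over reward functions under the Dirichlet process mixture.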
Author Information
Jaedeug Choi (KAIST)
Kee-Eung Kim (KAIST)
More from the Same Authors
- 2018 Poster: A Bayesian Approach to Generative Adversarial Imitation Learning »
  Wonseok Jeon · Seokin Seo · Kee-Eung Kim
- 2018 Spotlight: A Bayesian Approach to Generative Adversarial Imitation Learning »
  Wonseok Jeon · Seokin Seo · Kee-Eung Kim
- 2018 Poster: Monte-Carlo Tree Search for Constrained POMDPs »
  Jongmin Lee · Geon-Hyeong Kim · Pascal Poupart · Kee-Eung Kim
- 2017 Poster: Generative Local Metric Learning for Kernel Regression »
  Yung-Kyun Noh · Masashi Sugiyama · Kee-Eung Kim · Frank Park · Daniel Lee
- 2012 Poster: Cost-Sensitive Exploration in Bayesian Reinforcement Learning »
  Dongho Kim · Kee-Eung Kim · Pascal Poupart
- 2011 Poster: MAP Inference for Bayesian Inverse Reinforcement Learning »
  Jaedeug Choi · Kee-Eung Kim