

Poster

Teaching Inverse Reinforcement Learners via Features and Demonstrations

Luis Haug · Sebastian Tschiatschek · Adish Singla

Room 517 AB #167

Keywords: [ Model-Based RL ] [ Planning ]


Abstract:

Learning near-optimal behaviour from an expert's demonstrations typically relies on the assumption that the learner knows the features on which the true reward function depends. In this paper, we study the problem of learning from demonstrations when this assumption fails, i.e., when there is a mismatch between the worldviews of the learner and the expert. We introduce a natural quantity, the teaching risk, which measures the potential suboptimality of policies that look optimal to the learner in this setting. We show that bounds on the teaching risk guarantee that the learner can find a near-optimal policy using standard algorithms based on inverse reinforcement learning. Based on these findings, we propose a teaching scheme in which the expert can decrease the teaching risk by updating the learner's worldview, ultimately enabling her to find a near-optimal policy.
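The abstract does not spell out how the teaching risk is computed. Below is a minimal Python sketch under the assumption that the learner's worldview is a linear feature map A (a matrix whose rows are the features the learner can observe) and that the teaching risk is the largest alignment between the true reward weights w* and a unit direction in ker(A), i.e. a reward-relevant direction invisible to the learner. The names teaching_risk and teach_feature and the greedy selection rule are illustrative assumptions, not necessarily the paper's exact algorithm.

```python
import numpy as np

def teaching_risk(A, w_star):
    """Sketch (assumed formalization): the largest alignment of the true
    reward weights w_star with a unit vector in ker(A), the directions
    the learner cannot observe. This equals the norm of the component
    of w_star lying in ker(A)."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    kernel_basis = Vt[rank:]              # rows: orthonormal basis of ker(A)
    if kernel_basis.size == 0:
        return 0.0                        # learner sees every reward-relevant direction
    proj = kernel_basis.T @ (kernel_basis @ w_star)
    return float(np.linalg.norm(proj))

def teach_feature(A, w_star, candidates):
    """Greedy teaching step (sketch): the expert reveals the candidate
    feature whose addition to the learner's worldview A leaves the
    smallest remaining teaching risk."""
    best = min(candidates,
               key=lambda f: teaching_risk(np.vstack([A, f]), w_star))
    return np.vstack([A, best])

# Toy example: the true reward depends on 3 features, the learner sees 1.
rng = np.random.default_rng(0)
w_star = rng.normal(size=3)
w_star /= np.linalg.norm(w_star)
A = np.array([[1.0, 0.0, 0.0]])           # learner's initial worldview
candidates = [np.eye(3)[i] for i in (1, 2)]
print(teaching_risk(A, w_star))           # > 0: "optimal-looking" policies may be bad
A = teach_feature(A, w_star, candidates)
print(teaching_risk(A, w_star))           # smaller after the teaching step
```

In this toy run, the learner initially observes only one of three reward-relevant features; the expert's teaching step adds the candidate feature that most shrinks the component of w* hidden in ker(A), and the teaching risk decreases accordingly.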
