Misspecification in Inverse Reinforcement Learning
Joar Skalse · Alessandro Abate
The aim of Inverse Reinforcement Learning (IRL) is to infer a reward function $R$ from a policy $\pi$. To do this, we need a model of how $\pi$ relates to $R$. In the current literature, the most common models are \emph{optimality}, \emph{Boltzmann rationality}, and \emph{causal entropy maximisation}. One of the primary motivations behind IRL is to infer human preferences from human behaviour. However, the true relationship between human preferences and human behaviour is much more complex than any of the models currently used in IRL. This means that they are \emph{misspecified}, which raises the worry that they might lead to unsound inferences if applied to real-world data. In this paper, we provide a mathematical analysis of how robust different IRL models are to misspecification, and characterise precisely how far the demonstrator policy may differ from each of the standard models before that model leads to faulty inferences about the reward function $R$. We also introduce a framework for reasoning about misspecification in IRL, together with formal tools that can be used to easily derive the misspecification robustness of new IRL models.
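As a minimal sketch (not taken from the paper), the Boltzmann-rationality model mentioned above assumes the demonstrator chooses actions with probability proportional to $\exp(\beta Q(a))$, where $\beta$ is a rationality coefficient. The snippet below, with hypothetical one-step reward values chosen purely for illustration, computes such a policy and shows one reason identifiability is subtle: rewards that differ by a constant shift induce exactly the same Boltzmann policy.

```python
import numpy as np

def boltzmann_policy(q_values, beta=1.0):
    """Boltzmann-rational policy: P(a) proportional to exp(beta * Q(a)).
    As beta -> infinity this approaches the optimal (argmax) policy;
    beta = 0 gives the uniform policy."""
    logits = beta * np.asarray(q_values, dtype=float)
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Illustrative one-step example (values are made up): two reward
# functions differing by a constant shift yield the same policy,
# so no behavioural data can distinguish them under this model.
r1 = [1.0, 2.0, 3.0]
r2 = [11.0, 12.0, 13.0]  # r1 shifted by +10

p1 = boltzmann_policy(r1, beta=2.0)
p2 = boltzmann_policy(r2, beta=2.0)
assert np.allclose(p1, p2)
```

This shift-invariance is a standard observation about softmax policies; the paper's contribution concerns the harder question of what happens when the demonstrator deviates from the assumed model altogether.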
Author Information
Joar Skalse (University of Oxford)
Alessandro Abate (University of Oxford)
More from the Same Authors
- 2021 Spotlight: Reinforcement Learning in Newcomblike Environments »
  James Bell · Linda Linsefors · Caspar Oesterheld · Joar Skalse
- 2022: The Reward Hypothesis is False »
  Joar Skalse · Alessandro Abate
- 2022: A general framework for reward function distances »
  Erik Jenner · Joar Skalse · Adam Gleave
- 2022: All’s Well That Ends Well: Avoiding Side Effects with Distance-Impact Penalties »
  Charlie Griffin · Joar Skalse · Lewis Hammond · Alessandro Abate
- 2022 Poster: Defining and Characterizing Reward Gaming »
  Joar Skalse · Nikolaus Howe · Dmitrii Krasheninnikov · David Krueger
- 2021 Poster: Reinforcement Learning in Newcomblike Environments »
  James Bell · Linda Linsefors · Caspar Oesterheld · Joar Skalse
- 2020 Poster: A Randomized Algorithm to Reduce the Support of Discrete Measures »
  Francesco Cosentino · Harald Oberhauser · Alessandro Abate
- 2020 Spotlight: A Randomized Algorithm to Reduce the Support of Discrete Measures »
  Francesco Cosentino · Harald Oberhauser · Alessandro Abate