

Poster

Occam's razor is insufficient to infer the preferences of irrational agents

Stuart Armstrong · Sören Mindermann

Room 517 AB #125

Keywords: [ Markov Decision Processes ] [ Reinforcement Learning ]


Abstract:

Inverse reinforcement learning (IRL) attempts to infer human rewards or preferences from observed behavior. Since human planning systematically deviates from rationality, several approaches have been tried to account for specific human shortcomings. However, the general problem of inferring the reward function of an agent of unknown rationality has received little attention. Unlike the well-known ambiguity problems in IRL, this one is practically relevant but cannot be resolved by observing the agent's policy in enough environments. This paper shows (1) that a No Free Lunch result implies it is impossible to uniquely decompose a policy into a planning algorithm and reward function, and (2) that even with a reasonable simplicity prior/Occam's razor on the set of decompositions, we cannot distinguish between the true decomposition and others that lead to high regret. To address this, we need simple "normative" assumptions, which cannot be deduced exclusively from observations.
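A minimal sketch of the decomposition ambiguity described above: a rational planner paired with a reward function R and an anti-rational planner paired with -R produce identical behavior, so observations alone cannot identify the true (planner, reward) pair. The toy decision problem, state/action names, and reward values below are invented for illustration and are not from the paper.

```python
"""Illustrative sketch: two different (planner, reward) decompositions
that induce the same observable policy."""

# A toy one-step decision problem: reward for each (state, action) pair.
# These states, actions, and values are hypothetical.
REWARD = {
    ("hungry", "eat"): 1.0,
    ("hungry", "wait"): -1.0,
    ("full", "eat"): -0.5,
    ("full", "wait"): 0.5,
}
STATES = ["hungry", "full"]
ACTIONS = ["eat", "wait"]


def rational_planner(reward):
    """Planner that picks the reward-maximising action in each state."""
    return {s: max(ACTIONS, key=lambda a: reward[(s, a)]) for s in STATES}


def antirational_planner(reward):
    """Planner that picks the reward-minimising action in each state."""
    return {s: min(ACTIONS, key=lambda a: reward[(s, a)]) for s in STATES}


# Decomposition 1: the "true" reward with a rational planner.
policy_1 = rational_planner(REWARD)

# Decomposition 2: the negated reward with an anti-rational planner.
negated_reward = {sa: -r for sa, r in REWARD.items()}
policy_2 = antirational_planner(negated_reward)

# Both decompositions yield the same behaviour, so no amount of
# behavioural data alone can distinguish them.
assert policy_1 == policy_2
print(policy_1)  # {'hungry': 'eat', 'full': 'wait'}
```

The paper's stronger claim is that even a simplicity prior over decompositions does not rescue identifiability, since degenerate decompositions like the one above can be as simple as the intended one.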
