Reinforcement Learning in Newcomblike Environments

James Bell · Linda Linsefors · Caspar Oesterheld · Joar Skalse

Keywords: [ Reinforcement Learning and Planning ] [ Theory ]

Fri 10 Dec 8:30 a.m. PST — 10 a.m. PST

Spotlight presentation

Abstract:

Newcomblike decision problems have been studied extensively in the decision theory literature, but they have so far been largely absent from the reinforcement learning literature. In this paper we study value-based reinforcement learning algorithms in the Newcomblike setting, and answer some of the fundamental theoretical questions about the behaviour of such algorithms in these environments. We show that a value-based reinforcement learning agent cannot converge to a policy that is not "ratifiable", i.e., one that does not choose only actions that are optimal given that policy itself. This gives us a powerful tool for reasoning about the limit behaviour of agents -- for example, it lets us show that there are Newcomblike environments in which a reinforcement learning agent cannot converge to any optimal policy. We show that a ratifiable policy always exists in our setting, but that there are cases in which a reinforcement learning agent normally cannot converge to it (and hence cannot converge at all). We also prove several results about the possible limit behaviours of agents in cases where they do not converge to any policy.
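The non-convergence phenomenon the abstract describes can be illustrated with a toy experiment. The following is a minimal sketch (not the paper's construction): a hypothetical two-armed "Newcomblike bandit" in which a predictor observes the agent's current policy and rewards only the action the policy is least likely to play. Any deterministic policy is then not ratifiable (the action it commits to is suboptimal given that very policy), so a greedy value-based learner keeps flipping its preferred arm instead of settling. All names and parameters here are invented for illustration.

```python
import random

def newcomblike_reward(policy_probs, action):
    # Hypothetical predictor: it sees the agent's current policy (not the
    # sampled action), predicts the most likely action, and pays out only
    # when the agent plays the OTHER action.
    predicted = 0 if policy_probs[0] >= policy_probs[1] else 1
    return 1.0 if action != predicted else 0.0

def run(steps=5000, eps=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]          # value estimates for the two arms
    switches, prev_greedy = 0, 0
    for _ in range(steps):
        greedy = 0 if q[0] >= q[1] else 1
        if greedy != prev_greedy:
            switches += 1   # count changes of the greedy arm
            prev_greedy = greedy
        # epsilon-greedy policy induced by the current value estimates
        probs = [eps / 2, eps / 2]
        probs[greedy] += 1 - eps
        a = greedy if rng.random() > eps else rng.randrange(2)
        r = newcomblike_reward(probs, a)
        q[a] += alpha * (r - q[a])   # standard value update
    return q, switches

q, switches = run()
# The greedy arm flips many times over the run: whichever arm the agent
# favours becomes the predicted (unrewarded) one, so no deterministic
# policy is stable.
print(switches)
```

The unique ratifiable policy here is the uniform one, but an agent that acts greedily on its value estimates cannot represent converging to it, matching the abstract's claim that agents may be unable to converge at all.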
