Reward Machines (RMs), originally proposed for specifying problems in Reinforcement Learning (RL), provide a structured, automata-based representation of a reward function that allows an agent to decompose problems into subproblems that can be efficiently learned using off-policy learning. Here we show that RMs can be learned from experience, instead of being specified by the user, and that the resulting problem decomposition can be used to effectively solve partially observable RL problems. We pose the task of learning RMs as a discrete optimization problem where the objective is to find an RM that decomposes the problem into a set of subproblems such that the combination of their optimal memoryless policies is an optimal policy for the original problem. We show the effectiveness of this approach on three partially observable domains, where it significantly outperforms A3C, PPO, and ACER, and discuss its advantages, limitations, and broader potential.
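To make the automata-based representation concrete, below is a minimal illustrative sketch of a Reward Machine as a finite-state machine whose transitions fire on high-level propositions and emit rewards. The class name, API, and the coffee-delivery example task are hypothetical, chosen for illustration; they are not the paper's implementation.

```python
# Illustrative sketch of a Reward Machine (RM): a finite-state machine whose
# transitions are triggered by high-level propositions observed by the agent
# and whose edges emit rewards. All names here are hypothetical.

class RewardMachine:
    def __init__(self, transitions, initial_state):
        # transitions: {(state, proposition): (next_state, reward)}
        self.transitions = transitions
        self.initial = initial_state
        self.state = initial_state

    def step(self, proposition):
        """Advance the RM on an observed proposition; return the emitted reward."""
        next_state, reward = self.transitions.get(
            (self.state, proposition), (self.state, 0.0)  # default: self-loop, no reward
        )
        self.state = next_state
        return reward

    def reset(self):
        self.state = self.initial

# Hypothetical task: "get coffee (c), then deliver it to the office (o)".
rm = RewardMachine(
    transitions={
        ("u0", "c"): ("u1", 0.0),  # picked up coffee
        ("u1", "o"): ("u2", 1.0),  # delivered coffee: task complete
    },
    initial_state="u0",
)

rewards = [rm.step(p) for p in ["o", "c", "o"]]  # visiting the office first earns nothing
print(rewards)  # [0.0, 0.0, 1.0]
```

Each RM state defines a subproblem (e.g., "reach the coffee" from `u0`); a memoryless policy can be learned per state, which is the decomposition the abstract refers to.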
Rodrigo Toro Icarte (University of Toronto and Vector Institute)
I am a Ph.D. student in the knowledge representation group at the University of Toronto. I am also a member of the Canadian Artificial Intelligence Association and the Vector Institute. My supervisor is Sheila McIlraith. I did my undergrad in Computer Engineering and MSc in Computer Science at Pontificia Universidad Católica de Chile (PUC). My master's degree was co-supervised by Alvaro Soto and Jorge Baier. While I was at PUC, I taught the undergraduate course "Introduction to Computer Programming Languages."
Ethan Waldie (University of Toronto and Palantir Technologies)
Toryn Klassen (University of Toronto)
Rick Valenzano (Element AI)
Margarita Castro (University of Toronto)
Sheila McIlraith (University of Toronto)
Related Events (a corresponding poster, oral, or spotlight)
2019 Spotlight: Learning Reward Machines for Partially Observable Reinforcement Learning »
Wed Dec 11th, 10:40 -- 10:45 AM, Room: West Exhibition Hall A