Poster

Recursive Reinforcement Learning

Ernst Moritz Hahn · Mateo Perez · Sven Schewe · Fabio Somenzi · Ashutosh Trivedi · Dominik Wojtczak

Hall J (level 1) #803

Keywords: [ Reinforcement Learning ] [ Recursive Markov Decision Processes ] [ Probabilistic Context-Free Grammars ] [ Probabilistic Pushdown Automata ] [ Recursive State Machines ] [ Branching Processes ]


Abstract:

Recursion is a fundamental paradigm for finitely describing potentially infinite objects. As state-of-the-art reinforcement learning (RL) algorithms cannot directly reason about recursion, they must rely on the practitioner's ingenuity in designing a suitable "flat" representation of the environment. The resulting manual feature constructions and approximations are cumbersome and error-prone; their lack of transparency hampers scalability. To overcome these challenges, we develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another. Each constituent MDP is characterized by several entry and exit points that correspond to the input and output values of these invocations. These recursive MDPs (or RMDPs) are expressively equivalent to probabilistic pushdown systems (with the call stack playing the role of the pushdown stack) and can model probabilistic programs with recursive procedure calls. We introduce Recursive Q-learning, a model-free RL algorithm for RMDPs, and prove that it converges for finite, single-exit and deterministic multi-exit RMDPs under mild assumptions.
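As a rough illustration of the structure described in the abstract, the sketch below builds a toy single-exit RMDP as a collection of component MDPs that can call one another, simulates episodes with an explicit call stack (mirroring the pushdown-stack view), and learns tabular Q-values. All names here (COMPONENTS, run_episode, the specific states and rewards) are illustrative assumptions, and the every-visit Monte-Carlo backup is a didactic simplification, not the paper's Recursive Q-learning update.

```python
import random
from collections import defaultdict

# Illustrative single-exit RMDP (not from the paper).  Each component MDP has
# one entry, one exit, and states whose actions either take an internal step
# or recursively call another component and resume at a designated state.
COMPONENTS = {
    "main": {
        "entry": "m0",
        "exit": "m_exit",
        # state -> action -> list of (probability, outcome); an outcome is
        # ("step", next_state, reward) or ("call", callee, resume_state).
        "m0": {
            "call_sub": [(1.0, ("call", "sub", "m1"))],
            "finish":   [(1.0, ("step", "m_exit", 1.0))],
        },
        "m1": {"collect": [(1.0, ("step", "m_exit", 5.0))]},
    },
    "sub": {
        "entry": "s0",
        "exit": "s_exit",
        "s0": {
            # Terminates almost surely: recursion probability 0.3 < 1.
            "work": [(0.7, ("step", "s_exit", 0.0)),
                     (0.3, ("call", "sub", "s_exit"))],
        },
    },
}


def run_episode(q, eps=0.2, alpha=0.1):
    """One episode simulated with an explicit call stack (playing the role of
    the pushdown stack), followed by an every-visit Monte-Carlo update of the
    tabular Q-values."""
    stack = []                                # frames: (component, resume state)
    comp, state = "main", COMPONENTS["main"]["entry"]
    steps = []                                # (state key, action, reward)
    while True:
        node = COMPONENTS[comp]
        if state == node["exit"]:
            if not stack:
                break                         # exited the top-level component
            comp, state = stack.pop()         # return from the recursive call
            continue
        key, actions = (comp, state), list(node[state])
        if random.random() < eps:
            action = random.choice(actions)   # epsilon-greedy exploration
        else:
            action = max(actions, key=lambda a: q[(key, a)])
        u, outcome = random.random(), node[state][action][-1][1]
        for p, o in node[state][action]:      # sample a probabilistic outcome
            u -= p
            if u <= 0:
                outcome = o
                break
        if outcome[0] == "step":
            _, state, reward = outcome
            steps.append((key, action, reward))
        else:                                 # ("call", callee, resume_state)
            _, callee, resume = outcome
            steps.append((key, action, 0.0))
            stack.append((comp, resume))
            comp, state = callee, COMPONENTS[callee]["entry"]
    g = 0.0                                   # undiscounted return-to-go backup
    for key, action, reward in reversed(steps):
        g += reward
        q[(key, action)] += alpha * (g - q[(key, action)])
    return g


q = defaultdict(float)
for _ in range(5000):
    run_episode(q)
# The learned values should prefer "call_sub" (worth about 5) over
# "finish" (worth 1) at the entry state of "main".
print(q[(("main", "m0"), "call_sub")], q[(("main", "m0"), "finish")])
```

The explicit stack in the simulator is only meant to make the pushdown view from the abstract concrete, and the Monte-Carlo backup keeps the sketch short; the paper's Recursive Q-learning algorithm and its convergence conditions for single-exit and deterministic multi-exit RMDPs are given in the paper itself.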
