
Learning to Follow Instructions in Text-Based Games
Mathieu Tuli · Andrew Li · Pashootan Vaezipoor · Toryn Klassen · Scott Sanner · Sheila McIlraith

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #737

Text-based games present a unique class of sequential decision-making problems in which agents interact with a partially observable, simulated environment via actions and observations conveyed through natural language. Such observations typically include instructions that, in a reinforcement learning (RL) setting, can directly or indirectly guide a player towards completing reward-worthy tasks. In this work, we study the ability of RL agents to follow such instructions. We conduct experiments showing that the performance of state-of-the-art text-based game agents is largely unaffected by the presence or absence of such instructions, and that these agents are typically unable to execute tasks to completion. To further study and address the task of instruction following, we equip RL agents with an internal structured representation of natural language instructions in the form of Linear Temporal Logic (LTL), a formal language that is increasingly used for temporally extended reward specification in RL. Our framework both supports and highlights the benefit of understanding the temporal semantics of instructions and of measuring progress towards the achievement of such temporally extended behaviour. Experiments with 500+ games in TextWorld demonstrate the superior performance of our approach.
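To make the idea of "measuring progress towards a temporally extended behaviour" concrete, the sketch below shows standard LTL formula progression: a temporal formula encoding an instruction is rewritten after each observation, so the remaining formula always records what is left to do. The tuple-based formula encoding, the `prog` routine, and the proposition names (`key_taken`, `door_opened`) are illustrative assumptions, not the paper's actual implementation or API.

```python
# Hedged sketch of LTL progression for instruction tracking.
# Formulas are nested tuples; atomic propositions are strings.
# Supported operators: 'not', 'and', 'or', 'next' (X), 'until' (U),
# 'eventually' (F). All names here are illustrative assumptions.

def prog(f, obs):
    """Progress formula f through one step, given the set obs of true propositions."""
    if f in ('true', 'false'):
        return f
    if isinstance(f, str):                       # atomic proposition
        return 'true' if f in obs else 'false'
    op = f[0]
    if op == 'not':                              # negation of an atomic proposition
        return 'false' if prog(f[1], obs) == 'true' else 'true'
    if op == 'and':
        return conj(prog(f[1], obs), prog(f[2], obs))
    if op == 'or':
        return disj(prog(f[1], obs), prog(f[2], obs))
    if op == 'next':                             # X f: f must hold at the next step
        return f[1]
    if op == 'until':                            # f U g
        return disj(prog(f[2], obs), conj(prog(f[1], obs), f))
    if op == 'eventually':                       # F f  (== true U f)
        return disj(prog(f[1], obs), f)
    raise ValueError(f'unknown operator {op!r}')

def conj(a, b):                                  # 'and' with constant folding
    if 'false' in (a, b): return 'false'
    if a == 'true': return b
    if b == 'true': return a
    return ('and', a, b)

def disj(a, b):                                  # 'or' with constant folding
    if 'true' in (a, b): return 'true'
    if a == 'false': return b
    if b == 'false': return a
    return ('or', a, b)

# Instruction: "eventually take the key, and then eventually open the door."
task = ('eventually', ('and', 'key_taken', ('next', ('eventually', 'door_opened'))))

f = prog(task, set())            # nothing observed: obligation is unchanged
f = prog(f, {'key_taken'})       # key taken: the door is what remains
f = prog(f, {'door_opened'})     # door opened: formula progresses to 'true'
print(f)                         # prints: true
```

Because the progressed formula shrinks as subtasks are satisfied, it doubles as a progress signal: an agent can be rewarded when the formula simplifies, rather than only when the final task reward arrives.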

Author Information

Mathieu Tuli (University of Toronto and Vector Institute)
Andrew Li (University of Toronto)

I am a second-year PhD student in Computer Science at the University of Toronto and the Vector Institute for Artificial Intelligence, supervised by Sheila McIlraith. My research interests lie at the intersection of Machine Learning (particularly Reinforcement Learning), AI Planning, and Knowledge Representation & Reasoning. I aim to develop AI which learns over a long lifetime by acquiring knowledge from its interactions with the world, abstracting knowledge into generalizable concepts, and reasoning at a high-level to robustly handle new situations.

Pashootan Vaezipoor (University of Toronto)

Working at the intersection of Machine Learning and Symbolic AI. Currently working on improving SAT solvers via Reinforcement Learning.

Toryn Klassen (University of Toronto)
Scott Sanner (University of Toronto)
Sheila McIlraith (University of Toronto and Vector Institute)
