Workshop: Cooperative AI

Spotlight Talk: Interactive Inverse Reinforcement Learning for Cooperative Games

Thomas Kleine Büning · Anne-Marie George · Christos Dimitrakakis


We study the problem of designing AI agents that cooperate effectively with a potentially suboptimal partner while having no access to the joint reward function. This problem is modelled as a cooperative episodic two-agent Markov Decision Process. We assume control over only the first of the two agents in a Stackelberg formulation of the game, where the second agent acts so as to maximise expected utility given the first agent's policy. How should the first agent act so that it can learn the joint reward function as quickly as possible, and so that the joint policy is as close to optimal as possible? In this paper, we analyse how knowledge about the reward function can be gained. We show that when the learning agent's policies have a significant effect on the transition function, the reward function can be learned efficiently.
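The Stackelberg interaction described above can be illustrated with a minimal sketch: the leader (agent one) commits to a policy, and the follower (agent two) best-responds by backward induction in the MDP induced by that commitment. Everything below is an illustrative assumption, not the paper's construction: the instance is random, and the sizes (`nS`, `nA1`, `nA2`, `H`) and function names are hypothetical.

```python
import numpy as np

# Hedged sketch of a cooperative episodic two-agent MDP in a Stackelberg
# setting. A random toy instance stands in for the (unknown) joint reward.
rng = np.random.default_rng(0)
nS, nA1, nA2, H = 3, 2, 2, 4  # states, leader/follower actions, horizon

# Joint reward R[s, a1, a2] and transitions P[s, a1, a2, s'].
R = rng.random((nS, nA1, nA2))
P = rng.random((nS, nA1, nA2, nS))
P /= P.sum(axis=-1, keepdims=True)  # normalise rows to valid distributions

def follower_best_response(pi1):
    """Given the leader's deterministic, time-dependent policy
    pi1[h, s] -> a1, compute the follower's best response by
    backward induction in the induced single-agent MDP."""
    V = np.zeros(nS)                  # terminal value at step H
    pi2 = np.zeros((H, nS), dtype=int)
    for h in reversed(range(H)):
        Q = np.zeros((nS, nA2))       # follower's Q-values at step h
        for s in range(nS):
            a1 = pi1[h, s]            # leader's committed action
            Q[s] = R[s, a1] + P[s, a1] @ V
        pi2[h] = Q.argmax(axis=1)     # greedy follower action per state
        V = Q.max(axis=1)
    return pi2, V                     # V is the value at step h = 0

# Leader commits to an arbitrary fixed policy; follower best-responds.
pi1 = np.zeros((H, nS), dtype=int)
pi2, V0 = follower_best_response(pi1)
print(V0)  # joint return from each start state under (pi1, BR(pi1))
```

In this formulation the leader's question from the abstract becomes: which commitment `pi1` both steers the follower's best response toward high joint return and generates transitions that are informative about the unknown reward.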