

Poster

How does Inverse RL Scale to Large State Spaces? A Provably Efficient Approach

Filippo Lazzati · Mirco Mutti · Alberto Maria Metelli

West Ballroom A-D #6507
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In online Inverse Reinforcement Learning (IRL), the learner can collect samples about the dynamics of the environment to improve its estimate of the reward function. Since IRL suffers from identifiability issues, many theoretical works on online IRL focus on estimating the entire set of rewards that explain the demonstrations, named the feasible reward set. However, none of the algorithms available in the literature can scale to problems with large state spaces. In this paper, we focus on the online IRL problem in Linear Markov Decision Processes (MDPs). We show that the structure offered by Linear MDPs is not sufficient for efficiently estimating the feasible set when the state space is large. As a consequence, we introduce the novel framework of rewards compatibility, which generalizes the notion of feasible set, and we develop CATY-IRL, a sample-efficient algorithm whose complexity is independent of the size of the state space in Linear MDPs. When restricted to the tabular setting, we demonstrate that CATY-IRL is minimax optimal up to logarithmic factors. As a by-product, we show that Reward-Free Exploration (RFE) enjoys the same worst-case rate, improving over the state-of-the-art lower bound. Finally, we devise a unifying framework for IRL and RFE that may be of independent interest.
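For reference, the LaTeX sketch below states the feasible reward set as it is commonly defined in the IRL literature; the notation (a reward-free MDP M, an expert policy \pi^E, and the return J) is assumed here and may differ from the paper's exact formalization.

% Sketch of the feasible reward set (standard definition from the IRL
% literature; notation is assumed and may differ from the paper's own).
% Given a reward-free MDP M = (S, A, P, gamma) and an expert policy \pi^E,
% the feasible set collects every reward under which \pi^E is optimal:
\[
  \mathcal{R}_{\mathcal{M},\pi^E}
  \;=\;
  \Bigl\{\, r : S \times A \to \mathbb{R}
  \;\Big|\;
  \pi^E \in \mathop{\mathrm{arg\,max}}_{\pi} \; J^{\pi}_{\mathcal{M} \cup r}
  \,\Bigr\},
\]
% where J^{\pi}_{\mathcal{M} \cup r} denotes the expected return of policy
% \pi in the MDP obtained by pairing M with reward r. Online IRL algorithms
% targeting this set must estimate it from samples of the dynamics P, which
% is what becomes difficult when the state space S is large.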
