

Poster

Achieving Tractable Minimax Optimal Regret in Average Reward MDPs

Victor Boone · Zihan Zhang

East Exhibit Hall A-C #4711
[ Paper ] [ Poster ] [ OpenReview ]
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: In recent years, significant attention has been directed towards learning average-reward Markov Decision Processes (MDPs). However, existing algorithms either suffer from sub-optimal regret guarantees or computational inefficiencies. In this paper, we present the first *tractable* algorithm with minimax optimal regret of O(√(sp(h\*) · S · A · T · log(SAT))), where sp(h\*) is the span of the optimal bias function h\*, S × A is the size of the state-action space, and T is the number of learning steps. Remarkably, our algorithm does not require prior information on sp(h\*). Our algorithm relies on a novel subroutine, **P**rojected **M**itigated **E**xtended **V**alue **I**teration (PMEVI), to compute bias-constrained optimal policies efficiently. This subroutine can be applied to various previous algorithms to obtain improved regret bounds.
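The regret bound above is stated in terms of sp(h\*), the span of the optimal bias function. As a rough illustration of what that quantity measures (this is *not* the paper's PMEVI subroutine, only textbook relative value iteration), the sketch below computes the gain and bias of a hypothetical two-state, two-action MDP; all transition probabilities and rewards are made up for illustration.

```python
import numpy as np

# Hypothetical 2-state, 2-action toy MDP (not from the paper):
# transition tensor P[s, a, s'] and reward matrix r[s, a].
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0 under actions 0, 1
    [[0.3, 0.7], [0.6, 0.4]],   # transitions from state 1 under actions 0, 1
])
r = np.array([
    [0.0, 0.5],                  # rewards in state 0 under actions 0, 1
    [1.0, 0.2],                  # rewards in state 1 under actions 0, 1
])

def relative_value_iteration(P, r, tol=1e-9, max_iter=10_000):
    """Plain relative value iteration for an average-reward MDP.

    Returns an estimate of the optimal gain and of the bias function h,
    whose span sp(h) = max(h) - min(h) is the quantity appearing in the
    regret bound of the abstract.
    """
    S = P.shape[0]
    v = np.zeros(S)
    for _ in range(max_iter):
        # Bellman update: Q[s, a] = r[s, a] + sum_{s'} P[s, a, s'] * v[s']
        q = r + P @ v
        v_new = q.max(axis=1)
        gain = v_new[0] - v[0]          # gain estimate at reference state 0
        if np.max(np.abs((v_new - v) - gain)) < tol:
            v = v_new
            break
        v = v_new - v_new[0]            # renormalize to keep values bounded
    h = v - v[0]                         # bias, anchored at state 0
    return gain, h

gain, h = relative_value_iteration(P, r)
span = h.max() - h.min()                 # sp(h) for this toy instance
print(gain, span)
```

For this unichain, aperiodic toy instance the iteration converges and the gain lies between the smallest and largest one-step rewards; the span sp(h) it reports is the instance-dependent factor that the minimax regret bound scales with.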
