
Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning
Andrea Zanette · Martin J Wainwright · Emma Brunskill

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Actor-critic methods are widely used in offline reinforcement learning practice, but are not so well understood theoretically. We propose a new offline actor-critic algorithm that naturally incorporates the pessimism principle, leading to several key advantages compared to the state of the art. The algorithm can operate when the Bellman evaluation operator is closed with respect to the action-value functions of the actor's policies; this is a more general setting than the low-rank MDP model. Despite the added generality, the procedure is computationally tractable, as it involves the solution of a sequence of second-order programs. We prove an upper bound on the suboptimality gap of the policy returned by the procedure that depends on the data coverage of any arbitrary, possibly data-dependent comparator policy. The achievable guarantee is complemented with a minimax lower bound that is matching up to logarithmic factors.
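The pessimism principle the abstract refers to can be illustrated with a minimal sketch (not the authors' algorithm): a critic fits a linear action-value function to offline data, and the actor scores actions by a lower confidence bound, i.e., the point estimate minus an uncertainty bonus proportional to the elliptical norm of the feature vector under the inverse data covariance. All names, dimensions, and the penalty scale `beta` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline dataset: features phi(s, a) in R^d with noisy value targets.
d, n = 4, 200
Phi = rng.normal(size=(n, d))                  # features of logged (s, a) pairs
w_true = np.array([1.0, -0.5, 0.3, 0.0])       # hypothetical true weights
y = Phi @ w_true + 0.1 * rng.normal(size=n)    # observed returns

# Critic: ridge-regression fit of the action-value function.
lam = 1.0
Sigma = Phi.T @ Phi + lam * np.eye(d)          # regularized data covariance
w_hat = np.linalg.solve(Sigma, Phi.T @ y)

def pessimistic_value(phi, beta=1.0):
    """Lower confidence bound on Q: point estimate minus an uncertainty
    bonus scaled by the elliptical norm ||phi||_{Sigma^{-1}}. Actions
    poorly covered by the data are penalized more heavily."""
    bonus = beta * np.sqrt(phi @ np.linalg.solve(Sigma, phi))
    return phi @ w_hat - bonus

# Actor: among candidate actions, choose the best pessimistic value.
candidates = rng.normal(size=(5, d))           # hypothetical phi(s, a) vectors
best = max(range(5), key=lambda a: pessimistic_value(candidates[a]))
```

This is only a sketch of the lower-confidence-bound idea; the paper's procedure instead solves a sequence of second-order programs and handles the more general Bellman-closedness setting.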

Author Information

Andrea Zanette (Stanford University)
Martin J Wainwright (UC Berkeley)
Emma Brunskill (Stanford University)
