

Poster

Near-Optimal Dynamic Regret for Adversarial Linear Mixture MDPs

Long-Fei Li · Peng Zhao · Zhi-Hua Zhou

West Ballroom A-D #6109
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: We study reinforcement learning in episodic linear mixture MDPs with unknown transition and adversarial rewards under full-information feedback, adopting *dynamic regret* as the performance measure. We start with a comprehensive analysis of the strengths and weaknesses of the two popular methods for adversarial MDPs: policy-based and occupancy-measure-based methods. Our findings indicate that while policy-based methods can deal with unknown transitions effectively, they face challenges in handling non-stationary environments. In contrast, occupancy-measure-based methods are effective in addressing non-stationary environments but encounter difficulties with unknown transitions. Building on these insights, we propose a novel algorithm that combines the benefits of both methods. Specifically, our algorithm employs (i) an occupancy-measure-based global optimization with a two-layer structure to deal with non-stationary environments; and (ii) a policy-based variance-aware value-targeted regression to handle the unknown transition. We bridge the two parts through a new conversion. We show our algorithm enjoys an $\widetilde{\mathcal{O}}(d \sqrt{H^3 K} + \sqrt{HK(H + \bar{P}_K)})$ dynamic regret, where $d$ is the dimension of the feature mapping, $H$ is the episode length, $K$ is the number of episodes, and $\bar{P}_K$ is the non-stationarity measure. We show it is *minimax optimal* up to logarithmic factors by establishing a matching lower bound. To our knowledge, this is the **first** work that achieves **near-optimal** dynamic regret for adversarial linear mixture MDPs.
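
For context, *dynamic regret* in this setting is measured against an arbitrary sequence of comparator policies rather than a single fixed policy; the display below is a standard sketch of this measure (not text taken from the poster):

$$\text{D-Regret}(K) \;=\; \sum_{k=1}^{K} \Big( V_{k,1}^{\pi_k^c}(s_1) - V_{k,1}^{\pi_k}(s_1) \Big),$$

where $\pi_k$ is the learner's policy in episode $k$, $\{\pi_k^c\}_{k=1}^{K}$ is any comparator sequence, and $V_{k,1}^{\pi}(s_1)$ denotes the value of policy $\pi$ under the episode-$k$ reward. The non-stationarity measure $\bar{P}_K$ quantifies how much the comparator sequence varies across episodes; it vanishes for a fixed comparator, in which case the bound recovers a static-regret guarantee.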
