AMORE: A Model-based Framework for Improving Arbitrary Baseline Policies with Offline Data
Tengyang Xie · Mohak Bhardwaj · Nan Jiang · Ching-An Cheng
Event URL: https://openreview.net/forum?id=Dyh6UeiVMVB

We propose a new model-based offline RL framework, called Adversarial Models for Offline Reinforcement Learning (AMORE), which can robustly learn policies that improve upon an arbitrary baseline policy regardless of data coverage. Based on the concept of relative pessimism, AMORE is designed to optimize the worst-case relative performance under uncertainty. In theory, we prove that the policy learned by AMORE never degrades the performance of the baseline policy for any admissible hyperparameter, and that it can learn to compete with the best policy within the data coverage when the hyperparameter is well tuned and the baseline policy is supported by the data. This robust policy improvement property makes AMORE especially suitable for building real-world learning systems, because in practice ensuring no performance degradation is a prerequisite before any benefit of learning can be considered.
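
The abstract does not give formulas, but relative pessimism can be sketched as a max-min game over a version space of models consistent with the offline data; the notation below is illustrative and may differ from the paper's exact formulation:

    \hat{\pi} \in \arg\max_{\pi \in \Pi} \; \min_{M \in \mathcal{M}_{\mathcal{D}}} \; \big[ J_M(\pi) - J_M(\mu) \big]

Here \mu denotes the baseline policy, \mathcal{M}_{\mathcal{D}} is the set of models statistically consistent with the offline dataset \mathcal{D}, and J_M(\pi) is the return of policy \pi in model M. Choosing \pi = \mu makes the inner objective zero, so the optimal value is non-negative; under any model in the version space the learned policy is therefore no worse than the baseline, which is the robust policy improvement property described above.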

Author Information

Tengyang Xie (University of Illinois at Urbana-Champaign)
Mohak Bhardwaj (University of Washington)
Nan Jiang (University of Illinois at Urbana-Champaign)
Ching-An Cheng (Microsoft Research)
