
Supported Policy Optimization for Offline Reinforcement Learning
Jialong Wu · Haixu Wu · Zihan Qiu · Jianmin Wang · Mingsheng Long


Policy constraint methods for offline reinforcement learning (RL) typically employ parameterization or regularization to constrain the policy to actions within the support set of the behavior policy. Elaborate parameterization designs usually intrude into the policy network, which may incur extra inference cost and prevents taking full advantage of well-established online methods. Regularization methods reduce the divergence between the learned policy and the behavior policy, which can mismatch the inherent density-based definition of the support set and thus fail to avoid out-of-distribution actions effectively. This paper presents Supported Policy OpTimization (SPOT), which is derived directly from a theoretical formalization of the density-based support constraint. SPOT adopts a VAE-based density estimator to explicitly model the support set of the behavior policy and introduces a simple but effective density-based regularization term, which can be plugged non-intrusively into off-the-shelf off-policy RL algorithms. SPOT achieves state-of-the-art performance on standard benchmarks for offline RL. Benefiting from its pluggable design, offline-pretrained SPOT models can also be fine-tuned online seamlessly.
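To make the density-based regularization concrete, the following is a minimal scalar sketch: a VAE evidence lower bound (ELBO) stands in for the behavior-policy log-density log p(a|s), and a lambda-weighted version of it is added to the actor objective. The function names, the one-dimensional setting, and the weight `lam` are illustrative assumptions, not the paper's implementation.

```python
import math

def elbo_lower_bound(action, mu_z, log_std_z, recon_mu, recon_log_std):
    """Scalar sketch of a VAE ELBO used as a lower bound on the
    behavior-policy log-density log p(a|s). (Illustrative, 1-D case.)"""
    # Gaussian reconstruction log-likelihood of the action.
    recon_ll = -0.5 * (((action - recon_mu) / math.exp(recon_log_std)) ** 2
                       + 2.0 * recon_log_std + math.log(2.0 * math.pi))
    # KL divergence between the Gaussian encoder q(z|a,s) and the N(0,1) prior.
    kl = 0.5 * (math.exp(2.0 * log_std_z) + mu_z ** 2 - 1.0 - 2.0 * log_std_z)
    return recon_ll - kl

def spot_actor_loss(q_value, elbo, lam=0.1):
    """SPOT-style actor objective: maximize Q(s, pi(s)) plus a
    lambda-weighted log-density (ELBO) term, returned as a loss to minimize."""
    return -q_value - lam * elbo
```

Because the regularizer is just an additive loss term, it leaves the policy network untouched, which is what allows the same pretrained actor to be fine-tuned online by any standard off-policy algorithm.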

Author Information

Jialong Wu (School of Software, Tsinghua University)
Haixu Wu (Tsinghua University)
Zihan Qiu (IIIS, Tsinghua University)
Jianmin Wang (Tsinghua University)
Mingsheng Long (Tsinghua University)