Poster
Critic Regularized Regression
Ziyu Wang · Alexander Novikov · Konrad Zolna · Josh Merel · Jost Tobias Springenberg · Scott Reed · Bobak Shahriari · Noah Siegel · Caglar Gulcehre · Nicolas Heess · Nando de Freitas

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1372

Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with regard to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.
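At its core, CRR reweights behavioral cloning by a learned critic: dataset actions are cloned only to the extent that the critic judges them advantageous. Below is a minimal, hypothetical sketch of such a critic-filtered regression update for discrete actions; the module names (`q_net`, `pi_net`) and the binary-indicator weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CRR-style policy update (binary-indicator weighting),
# assuming a discrete-action critic q_net(s) -> Q-values and policy pi_net(s) -> logits.
import torch
import torch.nn.functional as F

def crr_policy_loss(q_net, pi_net, states, actions):
    """Advantage-weighted log-likelihood loss on dataset (state, action) pairs."""
    logits = pi_net(states)                                     # [B, A] policy logits
    q_all = q_net(states)                                       # [B, A] Q-values for all actions
    q_data = q_all.gather(1, actions.unsqueeze(1)).squeeze(1)   # Q(s, a) for dataset actions

    # Estimate V(s) as the policy's expected Q-value (exact for discrete actions).
    probs = F.softmax(logits, dim=-1)
    v = (probs * q_all).sum(dim=-1)                              # E_{a ~ pi}[Q(s, a)]

    # Binary filter: clone only actions the critic rates better than the policy average.
    advantage = q_data - v
    weights = (advantage > 0).float()

    log_pi = F.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(weights * log_pi).mean()
```

A softer variant would replace the indicator with an exponential weight such as `torch.exp(advantage / beta).clamp(max=weight_cap)`, trading the hard filter for advantage-proportional weighting; `beta` and `weight_cap` here are illustrative hyperparameter names.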

Author Information

Ziyu Wang (Google Brain)
Alexander Novikov (DeepMind)
Konrad Zolna (DeepMind)
Josh Merel (DeepMind)
Jost Tobias Springenberg (DeepMind)
Scott Reed (Google DeepMind)
Bobak Shahriari (DeepMind)
Noah Siegel (DeepMind)
Caglar Gulcehre (DeepMind)
Nicolas Heess (Google DeepMind)
Nando de Freitas (DeepMind)
