
Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning
Samin Yeasar Arnob · Riashat Islam · Doina Precup

We hypothesize that empirically studying the sample complexity of offline reinforcement learning (RL) is crucial for the practical application of RL in the real world. Several recent works have demonstrated the ability to learn policies directly from offline data. In this work, we ask how learning from offline data depends on the number of samples available. Our objective is to emphasize that studying sample complexity for offline RL is important and serves as an indicator of the usefulness of existing offline algorithms. We propose an evaluation approach for sample complexity analysis of offline RL.
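The evaluation approach described above can be illustrated with a minimal sketch: subsample the offline dataset at several sizes, train on each subsample, and record the resulting performance, yielding a sample-complexity curve. The helper names (`sample_complexity_curve`, `mean_reward`) and the toy dataset below are hypothetical stand-ins, not the authors' implementation; in practice the evaluation step would train and roll out an actual offline RL algorithm.

```python
import random

def sample_complexity_curve(dataset, fractions, train_and_eval, seed=0):
    """For each dataset fraction, subsample the offline data without
    replacement, run the given train-and-evaluate routine on the
    subsample, and record its score. Returns {fraction: score}."""
    rng = random.Random(seed)
    results = {}
    for frac in fractions:
        n = max(1, int(frac * len(dataset)))
        subset = rng.sample(dataset, n)
        results[frac] = train_and_eval(subset)
    return results

# Toy stand-ins (hypothetical): transitions are (state, action, reward)
# tuples, and "training" just estimates the mean behavior reward.
dataset = [(s, s % 2, float(s % 3)) for s in range(1000)]

def mean_reward(transitions):
    return sum(r for _, _, r in transitions) / len(transitions)

curve = sample_complexity_curve(dataset, [0.1, 0.5, 1.0], mean_reward)
```

Plotting `curve` (score versus dataset fraction) for each algorithm under comparison gives the kind of sample-complexity view the abstract argues for.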

Author Information

Samin Yeasar Arnob (McGill University)
Riashat Islam (MILA/McGill)
Doina Precup (McGill University / Mila / DeepMind Montreal)
