3rd Offline Reinforcement Learning Workshop: Offline RL as a "Launchpad"
Aviral Kumar · Rishabh Agarwal · Aravind Rajeswaran · Wenxuan Zhou · George Tucker · Doina Precup

Fri Dec 02 06:20 AM -- 03:30 PM (PST) @ Room 286
Event URL: https://offline-rl-neurips.github.io/2022/

While offline RL focuses on learning solely from fixed datasets, one of the main lessons from the previous edition of the offline RL workshop was that large-scale RL applications typically use offline RL as part of a bigger system rather than as the end goal in itself. Thus, we propose to shift the focus from algorithm design and offline RL applications to how offline RL can serve as a launchpad, i.e., a tool or a starting point, for solving challenges in sequential decision-making such as exploration, generalization, transfer, safety, and adaptation. In particular, we are interested in studying and discussing methods for learning expressive models, policies, skills, and value functions from data that can help us make progress toward efficiently tackling these challenges, which are otherwise often intractable.

Submission site: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/Offline_RL. The submission deadline is September 25, 2022 (Anywhere on Earth). Please refer to the submission page for more details.

Author Information

Aviral Kumar (UC Berkeley)
Rishabh Agarwal (Google Research, Brain Team)

My research mainly revolves around deep reinforcement learning (RL), often with the goal of making RL methods suitable for real-world problems; this work has received an outstanding paper award at NeurIPS.

Aravind Rajeswaran (FAIR)
Wenxuan Zhou (CMU)
George Tucker (Google Brain)
Doina Precup (McGill University / Mila / DeepMind Montreal)