Workshop
2nd Workshop on Offline Reinforcement Learning
Rishabh Agarwal · Aviral Kumar · George Tucker · Justin Fu · Nan Jiang · Doina Precup

Tue Dec 14 06:00 AM -- 03:20 PM (PST)
Event URL: https://offline-rl-neurips.github.io/2021

Offline reinforcement learning (RL) is a re-emerging area of study that aims to learn behaviors using only logged data, such as data from previous experiments or human demonstrations, without further environment interaction. It has the potential to enable tremendous progress in a number of real-world decision-making problems where active data collection is expensive (e.g., in robotics, drug discovery, dialogue generation, recommendation systems) or unsafe/dangerous (e.g., healthcare, autonomous driving, or education). Such a paradigm promises to resolve a key barrier to bringing reinforcement learning algorithms out of constrained lab settings and into the real world. The first edition of the offline RL workshop, held at NeurIPS 2020, focused on and led to algorithmic development in offline RL. This year we propose to shift the focus from algorithm design to bridging the gap between offline RL research and real-world offline RL. Our aim is to create a space for discussion between researchers and practitioners on topics of importance for enabling offline RL methods in the real world. To that end, we have revised the topics and themes of the workshop, invited new speakers working on application-focused areas, and, building on last year's lively panel discussion, invited its panelists back for a retrospective panel on how their perspectives have changed.
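To make the setting above concrete, here is a minimal, illustrative sketch (not part of the workshop materials) of learning purely from a fixed log of transitions with no further environment interaction. The toy MDP size, the randomly generated dataset, and the learning rate are all assumptions for illustration only.

```python
# Minimal sketch of the offline RL setting: fit a tabular Q-function
# from a static, logged dataset of (state, action, reward, next_state)
# transitions. The environment is never queried during learning.
# All problem sizes and hyperparameters below are illustrative assumptions.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9

# A fixed log of transitions, e.g. from past experiments or demonstrations.
rng = np.random.default_rng(0)
dataset = [
    (rng.integers(n_states), rng.integers(n_actions),
     rng.random(), rng.integers(n_states))
    for _ in range(500)
]

# Batch (fitted) Q-iteration over the static dataset only.
Q = np.zeros((n_states, n_actions))
for _ in range(100):
    Q_new = Q.copy()
    for s, a, r, s_next in dataset:
        target = r + gamma * Q[s_next].max()
        Q_new[s, a] += 0.1 * (target - Q_new[s, a])
    Q = Q_new

print("Greedy policy derived from logged data:", Q.argmax(axis=1))
```

In practice, offline RL methods must additionally handle distribution shift between the logged behavior and the learned policy, which is a central theme of the workshop.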


For details on submission please visit: https://offline-rl-neurips.github.io/2021 (Submission deadline: October 6, Anywhere on Earth)

Speakers:
Aviv Tamar (Technion - Israel Inst. of Technology)
Angela Schoellig (University of Toronto)
Barbara Engelhardt (Princeton University)
Sham Kakade (University of Washington/Microsoft)
Minmin Chen (Google)
Philip S. Thomas (UMass Amherst)

Tue 6:00 a.m. - 6:10 a.m.
Opening Remarks
Rishabh Agarwal
Tue 6:10 a.m. - 6:40 a.m.
Offline Bayesian RL (Talk)
Aviv Tamar
Tue 6:40 a.m. - 6:45 a.m.
Q&A for Aviv Tamar (Q&A)
Aviv Tamar
Tue 6:45 a.m. - 7:15 a.m.
Contributed Talks (x 3) (Contributed Talks)
Tue 7:15 a.m. - 8:15 a.m.
Poster Session 1 (Poster Session)
Tue 8:15 a.m. - 8:16 a.m.
Speaker Intro (Speaker Introduction)
Rishabh Agarwal
Tue 8:16 a.m. - 8:46 a.m.
Offline RL for Robotics (Talk)
Angela Schoellig
Tue 8:46 a.m. - 8:51 a.m.
Q&A for Angela Schoellig (Q&A)
Tue 8:51 a.m. - 8:52 a.m.
Speaker Intro (Live short intro)
Rishabh Agarwal
Tue 8:52 a.m. - 9:22 a.m.
Generalization theory in Offline RL (Talk)
Sham Kakade
Tue 9:22 a.m. - 9:27 a.m.
Q&A for Sham Kakade (Q&A)
Sham Kakade
Tue 9:30 a.m. - 10:30 a.m.
Retrospective Panel (Discussion Panel)
Sergey Levine, Nando de Freitas, Emma Brunskill, Finale Doshi-Velez, Nan Jiang, Rishabh Agarwal
Tue 10:30 a.m. - 11:30 a.m.
Invited Speaker Panel (Discussion Panel)
Sham Kakade, Minmin Chen, Philip Thomas, Angela Schoellig, Barbara Engelhardt, Doina Precup, George Tucker
Tue 11:30 a.m. - 12:00 p.m.
Break
Tue 12:00 p.m. - 12:01 p.m.
Speaker Intro
Aviral Kumar, George Tucker
Tue 12:01 p.m. - 12:31 p.m.
Offline RL for recommendation systems (Talk)
Minmin Chen
Tue 12:31 p.m. - 12:36 p.m.
Q&A for Minmin Chen (Q&A)
Minmin Chen
Tue 12:36 p.m. - 1:06 p.m.
Contributed Talks (x 3) (Talks)
George Tucker, Aviral Kumar
Tue 1:06 p.m. - 1:07 p.m.
Speaker Intro
Aviral Kumar, George Tucker
Tue 1:07 p.m. - 1:37 p.m.
Offline RL for Clinical Decision Making (Talk)
Barbara Engelhardt
Tue 1:37 p.m. - 1:42 p.m.
Q&A for Barbara Engelhardt (Q&A)
Tue 1:43 p.m. - 2:13 p.m.
Model selection in offline RL (Talk)
Philip Thomas
Tue 2:13 p.m. - 2:19 p.m.
Q&A for Philip Thomas (Q&A)
Philip Thomas
Tue 2:19 p.m. - 2:20 p.m.
Closing Remarks & Poster Session (Closing Remarks)
Tue 2:20 p.m. - 3:20 p.m.
Poster Session 2 (Poster Session)

Author Information

Rishabh Agarwal (Google Research, Brain Team)

I am a researcher in the Google Brain team in Montréal. My research interests mainly revolve around Deep Reinforcement Learning (RL), often with the goal of making RL methods suitable for real-world problems.

Aviral Kumar (UC Berkeley)
George Tucker (Google Brain)
Justin Fu (UC Berkeley)
Nan Jiang (University of Illinois at Urbana-Champaign)
Doina Precup (McGill University / Mila / DeepMind Montreal)