Workshop
Fri Dec 09 08:25 AM -- 05:35 PM (PST) @ Virtual
Deep Reinforcement Learning Workshop
Karol Hausman · Qi Zhang · Matthew Taylor · Martha White · Suraj Nair · Manan Tomar · Risto Vuorio · Ted Xiao · Zeyu Zheng

In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains such as robotics, strategy games, and multi-agent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and will help interested researchers outside the field gain a high-level view of the current state of the art and potential directions for future contributions.

Opening Remarks
Tobias Gerstenberg (Invited Talk)
ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret (Poster)
Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training (Poster)
Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function (Poster)
Offline Q-learning on Diverse Multi-Task Data Both Scales And Generalizes (Poster)
Jakob Foerster (Invited Talk)
Scientific Experiments in Reinforcement Learning (Opinion Talk)
Transformers are Sample-Efficient World Models (Poster)
Scaling Laws for a Multi-Agent Reinforcement Learning Model (Poster)
Natasha Jaques (Opinion Talk)
The World is not Uniformly Distributed; Important Implications for Deep RL (Opinion Talk)
Amy Zhang (Invited Talk)
Igor Mordatch (Invited Talk)
John Schulman (Implementation Talk)
Danijar Hafner (Implementation Talk)
Kristian Hartikainen (Implementation Talk)
Ilya Kostrikov, Aviral Kumar (Implementation Talk)
Panel Discussion
Closing Remarks
Novel Policy Seeking with Constrained Optimization (Poster)
Graph Q-Learning for Combinatorial Optimization (Poster)
Generalizable Point Cloud Reinforcement Learning for Sim-to-Real Dexterous Manipulation (Poster)
Dynamic Collaborative Multi-Agent Reinforcement Learning Communication for Autonomous Drone Reforestation (Poster)
Efficient Multi-Task Reinforcement Learning via Selective Behavior Sharing (Poster)
Look Back When Surprised: Stabilizing Reverse Experience Replay for Neural Approximation (Poster)
Improving Assistive Robotics with Deep Reinforcement Learning (Poster)
The Emphatic Approach to Average-Reward Policy Evaluation (Poster)
Learning Representations for Reinforcement Learning with Hierarchical Forward Models (Poster)
Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning (Poster)
Design Process is a Reinforcement Learning Problem (Poster)
Momentum Boosted Episodic Memory for Improving Learning in Long-Tailed RL Environments (Poster)
PD-MORL: Preference-Driven Multi-Objective Reinforcement Learning Algorithm (Poster)
Return Augmentation gives Supervised RL Temporal Compositionality (Poster)
Towards True Lossless Sparse Communication in Multi-Agent Systems (Poster)
Language Models Can Teach Themselves to Program Better (Poster)
The Surprising Effectiveness of Latent World Models for Continual Reinforcement Learning (Poster)
AsymQ: Asymmetric Q-loss to mitigate overestimation bias in off-policy reinforcement learning (Poster)
Multi-skill Mobile Manipulation for Object Rearrangement (Poster)
Curiosity in Hindsight (Poster)
Efficient Deep Reinforcement Learning Requires Regulating Statistical Overfitting (Poster)
Perturbed Quantile Regression for Distributional Reinforcement Learning (Poster)
SoftTreeMax: Policy Gradient with Tree Search (Poster)
Policy Architectures for Compositional Generalization in Control (Poster)
CLUTR: Curriculum Learning via Unsupervised Task Representation Learning (Poster)
A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning (Poster)
Simple Emergent Action Representations from Multi-Task Policy Training (Poster)
Integrating Episodic and Global Bonuses for Efficient Exploration (Poster)
Building a Subspace of Policies for Scalable Continual Learning (Poster)
Efficient Offline Policy Optimization with a Learned Model (Poster)
Efficient Multi-Horizon Learning for Off-Policy Reinforcement Learning (Poster)
Deep Learning of Intrinsically Motivated Options in the Arcade Learning Environment (Poster)
SEM2: Enhance Sample Efficiency and Robustness of End-to-end Urban Autonomous Driving via Semantic Masked World Model (Poster)
Learning a Domain-Agnostic Policy through Adversarial Representation Matching for Cross-Domain Policy Transfer (Poster)
Sample-efficient Adversarial Imitation Learning (Poster)
Adversarial Cheap Talk (Poster)
Ensemble based uncertainty estimation with overlapping alternative predictions (Poster)
Evaluating Long-Term Memory in 3D Mazes (Poster)
On All-Action Policy Gradients (Poster)
Imitating Human Behaviour with Diffusion Models (Poster)
Investigating Multi-task Pretraining and Generalization in Reinforcement Learning (Poster)
Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks (Poster)
Automated Dynamics Curriculums for Deep Reinforcement Learning (Poster)
Neural All-Pairs Shortest Path for Reinforcement Learning (Poster)
Quantization-aware Policy Distillation (QPD) (Poster)
A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning (Poster)
Transformer-based World Models Are Happy With 100k Interactions (Poster)
Human-AI Coordination via Human-Regularized Search and Learning (Poster)
PnP-Nav: Plug-and-Play Policies for Generalizable Visual Navigation Across Robots (Poster)
Foundation Models for History Compression in Reinforcement Learning (Poster)
Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier (Poster)
Off-policy Reinforcement Learning with Optimistic Exploration and Distribution Correction (Poster)
Noisy Symbolic Abstractions for Deep RL: A case study with Reward Machines (Poster)
Bayesian Q-learning With Imperfect Expert Demonstrations (Poster)
Adversarial Policies Beat Professional-Level Go AIs (Poster)
Supervised Q-Learning for Continuous Control (Poster)
Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective (Poster)
Inducing Functions through Reinforcement Learning without Task Specification (Poster)
Learning Successor Feature Representations to Train Robust Policies for Multi-task Learning (Poster)
Prioritizing Samples in Reinforcement Learning with Reducible Loss (Poster)
Biological Neurons vs Deep Reinforcement Learning: Sample efficiency in a simulated game-world (Poster)
Robust Option Learning for Adversarial Generalization (Poster)
Variance Reduction in Off-Policy Deep Reinforcement Learning using Spectral Normalization (Poster)
Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning (Poster)
Fast and Precise: Adjusting Planning Horizon with Adaptive Subgoal Search (Poster)
Reinforcement Learning in System Identification (Poster)
Imitation from Observation With Bootstrapped Contrastive Learning (Poster)
Hypernetwork-PPO for Continual Reinforcement Learning (Poster)
BLaDE: Robust Exploration via Diffusion Models (Poster)
Converging to Unexploitable Policies in Continuous Control Adversarial Games (Poster)
Model and Method: Training-Time Attack for Cooperative Multi-Agent Reinforcement Learning (Poster)
The Benefits of Model-Based Generalization in Reinforcement Learning (Poster)
DRL-EPANET: Deep reinforcement learning for optimal control at scale in Water Distribution Systems (Poster)
Understanding Hindsight Goal Relabeling Requires Rethinking Divergence Minimization (Poster)
Emergent collective intelligence from massive-agent cooperation and competition (Poster)
Constrained Imitation Q-learning with Earth Mover’s Distance reward (Poster)
Concept-based Understanding of Emergent Multi-Agent Behavior (Poster)
Do As You Teach: A Multi-Teacher Approach to Self-Play in Deep Reinforcement Learning (Poster)
CASA: Bridging the Gap between Policy Improvement and Policy Evaluation with Conflict Averse Policy Iteration (Poster)
Giving Robots a Hand: Broadening Generalization via Hand-Centric Human Video Demonstrations (Poster)
Actor Prioritized Experience Replay (Poster)
Multi-Source Transfer Learning for Deep Model-Based Reinforcement Learning (Poster)
Visual Imitation Learning with Patch Rewards (Poster)
Pink Noise Is All You Need: Colored Noise Exploration in Deep Reinforcement Learning (Poster)
Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement (Poster)
Implicit Offline Reinforcement Learning via Supervised Learning (Poster)
Hyperbolic Deep Reinforcement Learning (Poster)
Choreographer: Learning and Adapting Skills in Imagination (Poster)
Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning (Poster)
Graph Inverse Reinforcement Learning from Diverse Videos (Poster)
A study of natural robustness of deep reinforcement learning algorithms towards adversarial perturbations (Poster)
Visual Reinforcement Learning with Self-Supervised 3D Representations (Poster)
Time-Myopic Go-Explore: Learning A State Representation for the Go-Explore Paradigm (Poster)
Offline Reinforcement Learning for Customizable Visual Navigation (Poster)
Contrastive Example-Based Control (Poster)
Cyclophobic Reinforcement Learning (Poster)
In-context Reinforcement Learning with Algorithm Distillation (Poster)
Rethinking Learning Dynamics in RL using Adversarial Networks (Poster)
Fine-tuning Offline Policies with Optimistic Action Selection (Poster)
On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning (Poster)
MOPA: a Minimalist Off-Policy Approach to Safe-RL (Poster)
Fantastic Rewards and How to Tame Them: A Case Study on Reward Learning for Task-Oriented Dialogue Systems (Poster)
MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations (Poster)
In the ZONE: Measuring difficulty and progression in curriculum generation (Poster)
Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation (Poster)
Multi-Agent Policy Transfer via Task Relationship Modeling (Poster)
EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model (Poster)
Domain Invariant Q-Learning for model-free robust continuous control under visual distractions (Poster)
On The Fragility of Learned Reward Functions (Poster)
ABC: Adversarial Behavioral Cloning for Offline Mode-Seeking Imitation Learning (Poster)
Temporary Goals for Exploration (Poster)
Policy Aware Model Learning via Transition Occupancy Matching (Poster)
Training Equilibria in Reinforcement Learning (Poster)
Revisiting Bellman Errors for Offline Model Selection (Poster)
Deconfounded Imitation Learning (Poster)
Unleashing The Potential of Data Sharing in Ensemble Deep Reinforcement Learning (Poster)
Lagrangian Model Based Reinforcement Learning (Poster)
What Makes Certain Pre-Trained Visual Representations Better for Robotic Learning? (Poster)
A Ranking Game for Imitation Learning (Poster)
Aggressive Q-Learning with Ensembles: Achieving Both High Sample Efficiency and High Asymptotic Performance (Poster)
A Framework for Predictable Actor-Critic Control (Poster)
Informative rewards and generalization in curriculum learning (Poster)
Scaling Covariance Matrix Adaptation MAP-Annealing to High-Dimensional Controllers (Poster)
Learning Dexterous Manipulation from Exemplar Object Trajectories and Pre-Grasps (Poster)
Towards A Unified Policy Abstraction Theory and Representation Learning Approach in Markov Decision Processes (Poster)
Train Offline, Test Online: A Real Robot Learning Benchmark (Poster)
Uncertainty-Driven Exploration for Generalization in Reinforcement Learning (Poster)
Feasible Adversarial Robust Reinforcement Learning for Underspecified Environments (Poster)
A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games (Poster)
ERL-Re$^2$: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation (Poster)
Co-Imitation: Learning Design and Behaviour by Imitation (Poster)
Training graph neural networks with policy gradients to perform tree search (Poster)
Learning Exploration Policies with View-based Intrinsic Rewards (Poster)
Value-based CTDE Methods in Symmetric Two-team Markov Game: from Cooperation to Team Competition (Poster)
Memory-Efficient Reinforcement Learning with Priority based on Surprise and On-policyness (Poster)
One-shot Visual Imitation via Attributed Waypoints and Demonstration Augmentation (Poster)
MAESTRO: Open-Ended Environment Design for Multi-Agent Reinforcement Learning (Poster)
VI2N: A Network for Planning Under Uncertainty based on Value of Information (Poster)
Skill Machines: Temporal Logic Composition in Reinforcement Learning (Poster)
Variational Reparametrized Policy Learning with Differentiable Physics (Poster)
Confidence-Conditioned Value Functions for Offline Reinforcement Learning (Poster)
Variance Double-Down: The Small Batch Size Anomaly in Multistep Deep Reinforcement Learning (Poster)
Guiding Exploration Towards Impactful Actions (Poster)
Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning (Poster)
Replay Buffer With Local Forgetting for Adaptive Deep Model-Based Reinforcement Learning (Poster)
Abstract-to-Executable Trajectory Translation for One-Shot Task Generalization (Poster)
Analyzing the Sensitivity to Policy-Value Decoupling in Deep Reinforcement Learning Generalization (Poster)
Offline Reinforcement Learning on Real Robot with Realistic Data Sources (Poster)
Toward Effective Deep Reinforcement Learning for 3D Robotic Manipulation: End-to-End Learning from Multimodal Raw Sensory Data (Poster)
Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning (Poster)
Guided Skill Learning and Abstraction for Long-Horizon Manipulation (Poster)
Compositional Task Generalization with Modular Successor Feature Approximators (Poster)
PCRL: Priority Convention Reinforcement Learning for Microscopically Sequencable Multi-agent Problems (Poster)
A Game-Theoretic Perspective of Generalization in Reinforcement Learning (Poster)
Efficient Exploration using Model-Based Quality-Diversity with Gradients (Poster)
SPRINT: Scalable Semantic Policy Pre-training via Language Instruction Relabeling (Poster)
Distributional deep Q-learning with CVaR regression (Poster)
Planning Immediate Landmarks of Targets for Model-Free Skill Transfer across Agents (Poster)
Offline Reinforcement Learning from Heteroskedastic Data Via Support Constraints (Poster)
Pre-Training for Robots: Leveraging Diverse Multitask Data via Offline Reinforcement Learning (Poster)
Locally Constrained Representations in Reinforcement Learning (Poster)
Learning Semantics-Aware Locomotion Skills from Human Demonstrations (Poster)
Distance-Sensitive Offline Reinforcement Learning (Poster)
Better state exploration using action sequence equivalence (Poster)
Contrastive Value Learning: Implicit Models for Simple Offline RL (Poster)
Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning (Poster)
Toward Causal-Aware RL: State-Wise Action-Refined Temporal Difference (Poster)