In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multi-agent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside the field gain a high-level view of the current state of the art and potential directions for future contributions.
Fri 8:30 a.m. - 9:00 a.m. | Invited talk: Pierre-Yves Oudeyer "Machines that invent their own problems: Towards open-ended learning of skills" (Talk) [Video] | Pierre-Yves Oudeyer
Fri 9:00 a.m. - 9:15 a.m. | Contributed Talk: Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning (Talk) [Video] | Sammy Christen, Lukas Jendele, Emre Aksan, Otmar Hilliges
Fri 9:15 a.m. - 9:30 a.m. | Contributed Talk: Maximum Reward Formulation In Reinforcement Learning (Talk) [Video] | Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Ravi Chunduru, Ahmed Touati, Sriram Ganapathi, Matthew Taylor, Sarath Chandar
Fri 9:30 a.m. - 9:45 a.m. | Contributed Talk: Accelerating Reinforcement Learning with Learned Skill Priors (Talk) [Video] | Karl Pertsch, Youngwoon Lee, Joseph Lim
Fri 9:45 a.m. - 10:00 a.m. | Contributed Talk: Asymmetric self-play for automatic goal discovery in robotic manipulation (Talk) [Video] | OpenAI Robotics, Matthias Plappert, Raul Sampedro, Tao Xu, Ilge Akkaya, Vineet Kosaraju, Peter Welinder, Ruben D'Sa, Arthur Petron, Henrique Ponde, Alex Paino, Hyeonwoo Noh, Lilian Weng, Qiming Yuan, Casey Chu, Wojciech Zaremba
Fri 10:00 a.m. - 10:30 a.m. | Invited talk: Marc Bellemare "Autonomous navigation of stratospheric balloons using reinforcement learning" (Talk) | Marc Bellemare
Fri 10:30 a.m. - 11:00 a.m. | Break
Fri 11:00 a.m. - 11:30 a.m. | Invited talk: Peter Stone "Grounded Simulation Learning for Sim2Real with Connections to Off-Policy Reinforcement Learning" (Talk) [Video] | Peter Stone

For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk introduces Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot. Grounded Simulation Learning has led to the fastest known stable walk on a widely used humanoid robot. Connections to theoretical advances in off-policy reinforcement learning will be highlighted.
Fri 11:30 a.m. - 11:45 a.m. | Contributed Talk: Mirror Descent Policy Optimization (Talk) [Video] | Manan Tomar, Lior Shani, Yonathan Efroni, Mohammad Ghavamzadeh
Fri 11:45 a.m. - 12:00 p.m. | Contributed Talk: Planning from Pixels using Inverse Dynamics Models (Talk) [Video] | Keiran Paster, Sheila McIlraith, Jimmy Ba
Fri 12:00 p.m. - 12:30 p.m. | Invited talk: Matt Botvinick "Alchemy: A Benchmark Task Distribution for Meta-Reinforcement Learning Research" (Talk) [Video] | Matt Botvinick
Fri 12:30 p.m. - 1:30 p.m. | Poster session 1
Fri 1:30 p.m. - 2:00 p.m. | Invited talk: Susan Murphy "We used RL but…. Did it work?!" (Talk) [Video] | Susan Murphy

Digital healthcare is a growing area of importance in modern healthcare due to its potential to help individuals improve their behaviors so as to better manage chronic health challenges such as hypertension, mental health, cancer and so on. Digital apps and wearables observe the user's state via sensors or self-report, deliver treatment actions (reminders, motivational messages, suggestions, social outreach, ...), and repeatedly observe rewards from the user across time. This area is seeing increasing interest from RL researchers, with the goal of including in the digital app or wearable an RL algorithm that "personalizes" the treatments to the user. But after RL is run on a number of users, how do we know whether the RL algorithm actually personalized the sequential treatments to the user? In this talk we report on our first efforts to address this question after our RL algorithm was deployed on each of 111 individuals with hypertension.
Fri 2:00 p.m. - 2:15 p.m. | Contributed Talk: MaxEnt RL and Robust Control (Talk) [Video] | Benjamin Eysenbach, Sergey Levine
Fri 2:15 p.m. - 2:30 p.m. | Contributed Talk: Reset-Free Lifelong Learning with Skill-Space Planning (Talk) [Video] | Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch
Fri 2:30 p.m. - 3:00 p.m. | Invited talk: Anusha Nagabandi "Model-based Deep Reinforcement Learning for Robotic Systems" (Talk) [Video] | Anusha Nagabandi

Deep learning has shown promising results in robotics, but we are still far from having intelligent systems that can operate in the unstructured settings of the real world, where disturbances, variations, and unobserved factors lead to a dynamic environment. In this talk, we'll see that model-based deep RL can indeed allow for efficient skill acquisition, as well as the ability to repurpose models to solve a variety of tasks. We'll scale up these approaches to enable locomotion with a 6-DoF legged robot on varying terrains in the real world, as well as dexterous manipulation with a 24-DoF anthropomorphic hand in the real world. We then focus on the inevitable mismatch between an agent's training conditions and the test conditions in which it may actually be deployed, thus illuminating the need for adaptive systems. Inspired by the ability of humans and animals to adapt quickly in the face of unexpected changes, we present a meta-learning algorithm within this model-based RL framework to enable online adaptation of large, high-capacity models using only small amounts of data from the new task. These fast adaptation capabilities are seen in both simulation and the real world, with experiments such as a 6-legged robot adapting online to an unexpected payload or suddenly losing a leg. We will then further extend the capabilities of our robotic systems by enabling the agents to reason directly from raw image observations. Bridging the benefits of representation learning techniques with the adaptation capabilities of meta-RL, we'll present a unified framework for effective meta-RL from images. With robotic arms in the real world that learn peg insertion and ethernet cable insertion to varying targets, we'll see the fast acquisition of new skills, directly from raw image observations in the real world.

Finally, this talk will conclude that model-based deep RL provides a framework for making sense of the world, thus allowing for reasoning and adaptation capabilities that are necessary for successful operation in the dynamic settings of the real world.
Fri 3:00 p.m. - 3:30 p.m. | Break
Fri 3:30 p.m. - 4:00 p.m. | Invited talk: Ashley Edwards "Learning Offline from Observation" (Talk) [Video] | Ashley Edwards

A common trope in sci-fi is to have a robot that can quickly solve some problem after watching a person, studying a video, or reading a book. While these settings are (currently) fictional, the benefits are real. Agents that can solve tasks by observing others have the potential to greatly reduce the burden of their human teachers, removing some of the need to hand-specify rewards or goals. In this talk, I consider the question of how an agent can not only learn by observing others, but also how it can learn quickly by training offline before taking any steps in the environment. First, I will describe an approach that trains a latent policy directly from state observations, which can then be quickly mapped to real actions in the agent’s environment. Then I will describe how we can train a novel value function, Q(s,s’), to learn off-policy from observations. Unlike previous imitation from observation approaches, this formulation goes beyond simply imitating and rather enables learning from potentially suboptimal observations.

Fri 4:00 p.m. - 4:07 p.m. | NeurIPS RL Competitions: Flatland challenge (Talk) [Video] | Sharada Mohanty
Fri 4:07 p.m. - 4:15 p.m. | NeurIPS RL Competitions: Learning to run a power network (Talk) [Video] | Antoine Marot
Fri 4:15 p.m. - 4:22 p.m. | NeurIPS RL Competitions: Procgen challenge (Talk) | Sharada Mohanty
Fri 4:22 p.m. - 4:30 p.m. | NeurIPS RL Competitions: MineRL (Talk) [Video] | William Guss, Stephanie Milani
Fri 4:30 p.m. - 5:00 p.m. | Invited talk: Karen Liu "Deep Reinforcement Learning for Physical Human-Robot Interaction" (Talk) [Video] | Karen Liu

Creating realistic virtual humans has traditionally been considered a research problem in Computer Animation primarily for entertainment applications. With the recent breakthrough in collaborative robots and deep reinforcement learning, accurately modeling human movements and behaviors has become a common challenge also faced by researchers in robotics and artificial intelligence. For example, mobile robots and autonomous vehicles can benefit from training in environments populated with ambulating humans and learning to avoid colliding with them. Healthcare robots, on the other hand, need to embrace physical contact and learn to utilize it to enable humans' activities of daily living. An immediate concern in developing such an autonomous and powered robotic device is the safety of human users during the early development phase, when the control policies are still largely suboptimal. Learning from physically simulated humans and environments presents a promising alternative which enables robots to safely make and learn from mistakes without putting real people at risk. However, deploying such policies to interact with people in the real world adds additional complexity to the already challenging sim-to-real transfer problem. In this talk, I will present our current progress on solving the problem of sim-to-real transfer with humans in the environment, actively interacting with the robots through physical contacts. We tackle the problem from two fronts: developing more relevant human models to facilitate robot learning, and developing human-aware robot perception and control policies. As an example of contextualizing our research effort, we develop a mobile manipulator to put clothes on people with physical impairments, enabling them to carry out day-to-day tasks and maintain independence.
Fri 5:00 p.m. - 6:00 p.m. | Panel discussion | Pierre-Yves Oudeyer, Marc Bellemare, Peter Stone, Matt Botvinick, Susan Murphy, Anusha Nagabandi, Ashley Edwards, Karen Liu, Pieter Abbeel
Fri 6:00 p.m. - 7:00 p.m. | Poster session 2
- Poster: Planning from Pixels using Inverse Dynamics Models [Video]
- Poster: OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning [Video]
- Poster: Maximum Reward Formulation In Reinforcement Learning [Video]
- Poster: Reset-Free Lifelong Learning with Skill-Space Planning [Video]
- Poster: Mirror Descent Policy Optimization [Video]
- Poster: MaxEnt RL and Robust Control [Video]
- Poster: Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning [Video]
- Poster: Provably Efficient Policy Optimization via Thompson Sampling [Video]
- Poster: Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates [Video]
- Poster: Efficient Competitive Self-Play Policy Optimization [Video]
- Poster: Asymmetric self-play for automatic goal discovery in robotic manipulation [Video]
- Poster: Correcting Momentum in Temporal Difference Learning [Video]
- Poster: Decoupling Exploration and Exploitation in Meta-Reinforcement Learning without Sacrifices [Video]
- Poster: Diverse Exploration via InfoMax Options [Video]
- Poster: Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads
- Poster: Parrot: Data-driven Behavioral Priors for Reinforcement Learning [Video]
- Poster: C-Learning: Horizon-Aware Cumulative Accessibility Estimation [Video]
- Poster: Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning
- Poster: Data-Efficient Reinforcement Learning with Self-Predictive Representations [Video]
- Poster: Accelerating Reinforcement Learning with Learned Skill Priors [Video]
- Poster: C-Learning: Learning to Achieve Goals via Recursive Classification [Video]
- Poster: Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers [Video]
- Poster: Learning to Reach Goals via Iterated Supervised Learning [Video]
- Poster: Unified View of Inference-based Off-policy RL: Decoupling Algorithmic and Implemental Source of Performance Gaps [Video]
- Poster: Learning to Sample with Local and Global Contexts in Experience Replay Buffer [Video]
- Poster: Adversarial Environment Generation for Learning to Navigate the Web [Video]
- Poster: Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in First-person Simulated 3D Environments [Video]
- Poster: DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies [Video]
- Poster: Discovery of Options via Meta-Gradients [Video]
- Poster: GRAC: Self-Guided and Self-Regularized Actor-Critic [Video]
- Poster: Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity [Video]
- Poster: Deep Bayesian Quadrature Policy Gradient [Video]
- Poster: PixL2R: Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards [Video]
- Poster: A Policy Gradient Method for Task-Agnostic Exploration [Video]
- Poster: Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning [Video]
- Poster: Skill Transfer via Partially Amortized Hierarchical Planning [Video]
- Poster: On Effective Parallelization of Monte Carlo Tree Search [Video]
- Poster: Mastering Atari with Discrete World Models
- Poster: Average Reward Reinforcement Learning with Monotonic Policy Improvement [Video]
- Poster: Combating False Negatives in Adversarial Imitation Learning [Video]
- Poster: Evaluating Agents Without Rewards [Video]
- Poster: Learning Latent Landmarks for Generalizable Planning [Video]
- Poster: Conservative Safety Critics for Exploration [Video]
- Poster: Solving Compositional Reinforcement Learning Problems via Task Reduction [Video]
- Poster: Deep Q-Learning with Low Switching Cost [Video]
- Poster: Learning to Represent Action Values as a Hypergraph on the Action Vertices [Video]
- Poster: Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets [Video]
- Poster: TACTO: A Simulator for Learning Control from Touch Sensing [Video]
- Poster: Safe Reinforcement Learning with Natural Language Constraints [Video]
- Poster: Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks [Video]
- Poster: An Examination of Preference-based Reinforcement Learning for Treatment Recommendation [Video]
- Poster: Model-based Navigation in Environments with Novel Layouts Using Abstract $n$-D Maps [Video]
- Poster: Online Safety Assurance for Deep Reinforcement Learning [Video]
- Poster: Lyapunov Barrier Policy Optimization [Video]
- Poster: Evolving Reinforcement Learning Algorithms [Video]
- Poster: Chaining Behaviors from Data with Model-Free Reinforcement Learning [Video]
- Poster: Pairwise Weights for Temporal Credit Assignment [Video]
- Poster: Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning [Video]
- Poster: Understanding Learned Reward Functions [Video]
- Poster: Addressing reward bias in Adversarial Imitation Learning with neutral reward functions [Video]
- Poster: Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples [Video]
- Poster: Decoupling Representation Learning from Reinforcement Learning
- Poster: Model-Based Reinforcement Learning via Latent-Space Collocation [Video]
- Poster: A Variational Inference Perspective on Goal-Directed Behavior in Reinforcement Learning [Video]
- Poster: SCC: an efficient deep reinforcement learning agent mastering the game of StarCraft II [Video]
- Poster: Predictive PER: Balancing Priority and Diversity towards Stable Deep Reinforcement Learning [Video]
- Poster: Latent State Models for Meta-Reinforcement Learning from Images [Video]
- Poster: Dream and Search to Control: Latent Space Planning for Continuous Control [Video]
- Poster: Explanation Augmented Feedback in Human-in-the-Loop Reinforcement Learning [Video]
- Poster: Goal-Conditioned Reinforcement Learning in the Presence of an Adversary [Video]
- Poster: Regularized Inverse Reinforcement Learning [Video]
- Poster: Domain Adversarial Reinforcement Learning [Video]
- Poster: Safety Aware Reinforcement Learning [Video]
- Poster: Sample Efficient Training in Multi-Agent Adversarial Games with Limited Teammate Communication [Video]
- Poster: Amortized Variational Deep Q Network [Video]
- Poster: Disentangled Planning and Control in Vision Based Robotics via Reward Machines [Video]
- Poster: Maximum Mutation Reinforcement Learning for Scalable Control [Video]
- Poster: Unsupervised Task Clustering for Multi-Task Reinforcement Learning [Video]
- Poster: Learning Intrinsic Symbolic Rewards in Reinforcement Learning [Video]
- Poster: Preventing Value Function Collapse in Ensemble Q-Learning by Maximizing Representation Diversity [Video]
- Poster: Action and Perception as Divergence Minimization [Video]
- Poster: Randomized Ensembled Double Q-Learning: Learning Fast Without a Model [Video]
- Poster: D2RL: Deep Dense Architectures in Reinforcement Learning [Video]
- Poster: Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms [Video]
- Poster: Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization [Video]
- Poster: What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study [Video]
- Poster: Semantic State Representation for Reinforcement Learning [Video]
- Poster: Hyperparameter Auto-tuning in Self-Supervised Robotic Learning [Video]
- Poster: Targeted Query-based Action-Space Adversarial Policies on Deep Reinforcement Learning Agents [Video]
- Poster: Abstract Value Iteration for Hierarchical Deep Reinforcement Learning [Video]
- Poster: Compute- and Memory-Efficient Reinforcement Learning with Latent Experience Replay [Video]
- Poster: Emergent Road Rules In Multi-Agent Driving Environments [Video]
- Poster: An Algorithmic Causal Model of Credit Assignment in Reinforcement Learning [Video]
- Poster: Learning to Weight Imperfect Demonstrations [Video]
- Poster: Structure and randomness in planning and reinforcement learning [Video]
- Poster: Parameter-based Value Functions [Video]
- Poster: Influence-aware Memory for Deep Reinforcement Learning in POMDPs [Video]
- Poster: Modular Training, Integrated Planning Deep Reinforcement Learning for Mobile Robot Navigation [Video]
- Poster: How to make Deep RL work in Practice [Video]
- Poster: Super-Human Performance in Gran Turismo Sport Using Deep Reinforcement Learning [Video]
- Poster: Which Mutual-Information Representation Learning Objectives are Sufficient for Control? [Video]
- Poster: Curriculum Learning through Distilled Discriminators [Video]
- Poster: Self-Supervised Policy Adaptation during Deployment [Video]
- Poster: Trust, but verify: model-based exploration in sparse reward environments [Video]
- Poster: Optimizing Traffic Bottleneck Throughput using Cooperative, Decentralized Autonomous Vehicles [Video]
- Poster: Tonic: A Deep Reinforcement Learning Library for Fast Prototyping and Benchmarking [Video]
- Poster: Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research [Video]
- Poster: Reinforcement Learning with Latent Flow [Video]
- Poster: Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization [Video]
- Poster: AWAC: Accelerating Online Reinforcement Learning With Offline Datasets [Video]
- Poster: Inter-Level Cooperation in Hierarchical Reinforcement Learning [Video]
- Poster: Towards Effective Context for Meta-Reinforcement Learning: an Approach based on Contrastive Learning [Video]
- Poster: Multi-Agent Option Critic Architecture
- Poster: Measuring Visual Generalization in Continuous Control from Pixels [Video]
- Poster: Policy Learning Using Weak Supervision [Video]
- Poster: Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments [Video]
- Poster: Unsupervised Domain Adaptation for Visual Navigation [Video]
- Poster: Learning Markov State Abstractions for Deep Reinforcement Learning [Video]
- Poster: Value Generalization among Policies: Improving Value Function with Policy Representation [Video]
- Poster: Energy-based Surprise Minimization for Multi-Agent Value Factorization [Video]
- Poster: Backtesting Optimal Trade Execution Policies in Agent-Based Market Simulator [Video]
- Poster: Successor Landmarks for Efficient Exploration and Long-Horizon Navigation [Video]
- Poster: Multi-task Reinforcement Learning with a Planning Quasi-Metric [Video]
- Poster: R-LAtte: Visual Control via Deep Reinforcement Learning with Attention Network [Video]
- Poster: Quantifying Differences in Reward Functions [Video]
- Poster: DERAIL: Diagnostic Environments for Reward And Imitation Learning [Video]
- Poster: Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations [Video]
- Poster: Unlocking the Potential of Deep Counterfactual Value Networks [Video]
- Poster: FactoredRL: Leveraging Factored Graphs for Deep Reinforcement Learning [Video]
- Poster: Reusability and Transferability of Macro Actions for Reinforcement Learning [Video]
- Poster: Interactive Visualization for Debugging RL [Video]
- Poster: A Deep Value-based Policy Search Approach for Real-world Vehicle Repositioning on Mobility-on-Demand Platforms [Video]
- Poster: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance [Video]
- Poster: Visual Imitation with Reinforcement Learning using Recurrent Siamese Networks [Video]
- Poster: Learning Accurate Long-term Dynamics for Model-based Reinforcement Learning [Video]
- Poster: XLVIN: eXecuted Latent Value Iteration Nets [Video]
- Poster: Beyond Exponentially Discounted Sum: Automatic Learning of Return Function [Video]
- Poster: XT2: Training an X-to-Text Typing Interface with Online Learning from Implicit Feedback [Video]
- Poster: Greedy Multi-Step Off-Policy Reinforcement Learning [Video]
- Poster: Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning [Video]
- Poster: Robust Domain Randomised Reinforcement Learning through Peer-to-Peer Distillation [Video]
- Poster: ReaPER: Improving Sample Efficiency in Model-Based Latent Imagination [Video]
- Poster: Model-Based Reinforcement Learning: A Compressed Survey [Video]
- Poster: BeBold: Exploration Beyond the Boundary of Explored Regions [Video]
- Poster: Model-Based Visual Planning with Self-Supervised Functional Distances [Video]
- Poster: Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [Video]
- Poster: Utilizing Skipped Frames in Action Repeats via Pseudo-Actions [Video]
- Poster: Bringing order into Actor-Critic Algorithms using Stackelberg Games
- Poster: Continual Model-Based Reinforcement Learning with Hypernetworks [Video]
- Poster: Online Hyper-parameter Tuning in Off-policy Learning via Evolutionary Strategies [Video]
- Poster: Policy Guided Planning in Learned Latent Space [Video]
- Poster: PettingZoo: Gym for Multi-Agent Reinforcement Learning [Video]
- Poster: DREAM: Deep Regret minimization with Advantage baselines and Model-free learning [Video]
Author Information
Pieter Abbeel (UC Berkeley & Covariant)
Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner AI@TheHouse venture fund, Advisor to many AI/Robotics start-ups. He works in machine learning and robotics. In particular his research focuses on making robots learn from people (apprenticeship learning), how to make robots learn through their own trial and error (reinforcement learning), and how to speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS and ICRA, early career awards from NSF, Darpa, ONR, AFOSR, Sloan, TR35, IEEE, and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including New York Times, BBC, Bloomberg, Wall Street Journal, Wired, Forbes, Tech Review, NPR.
Chelsea Finn (Stanford)
Joelle Pineau (McGill University)
Joelle Pineau is an Associate Professor and William Dawson Scholar at McGill University where she co-directs the Reasoning and Learning Lab. She also leads the Facebook AI Research lab in Montreal, Canada. She holds a BASc in Engineering from the University of Waterloo, and an MSc and PhD in Robotics from Carnegie Mellon University. Dr. Pineau's research focuses on developing new models and algorithms for planning and learning in complex partially-observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a recipient of NSERC's E.W.R. Steacie Memorial Fellowship (2018), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.
David Silver (DeepMind)
Satinder Singh (University of Michigan)
Coline Devin (DeepMind)
Misha Laskin (UC Berkeley)
Kimin Lee (UC Berkeley)
Janarthanan Rajendran (University of Michigan)
Vivek Veeriah (University of Michigan)
2019 Workshop: Retrospectives: A Venue for Self-Reflection in ML Research »
Ryan Lowe · Yoshua Bengio · Joelle Pineau · Michela Paganini · Jessica Forde · Shagun Sodhani · Abhishek Gupta · Joel Lehman · Peter Henderson · Kanika Madan · Koustuv Sinha · Xavier Bouthillier -
2019 Poster: Evaluating Protein Transfer Learning with TAPE »
Roshan Rao · Nicholas Bhattacharya · Neil Thomas · Yan Duan · Peter Chen · John Canny · Pieter Abbeel · Yun Song -
2019 Spotlight: Evaluating Protein Transfer Learning with TAPE »
Roshan Rao · Nicholas Bhattacharya · Neil Thomas · Yan Duan · Peter Chen · John Canny · Pieter Abbeel · Yun Song -
2019 Poster: Goal-conditioned Imitation Learning »
Yiming Ding · Carlos Florensa · Pieter Abbeel · Mariano Phielipp -
2019 Poster: Geometry-Aware Neural Rendering »
Joshua Tobin · Wojciech Zaremba · Pieter Abbeel -
2019 Poster: MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies »
Xue Bin Peng · Michael Chang · Grace Zhang · Pieter Abbeel · Sergey Levine -
2019 Poster: Discovery of Useful Questions as Auxiliary Tasks »
Vivek Veeriah · Matteo Hessel · Zhongwen Xu · Janarthanan Rajendran · Richard L Lewis · Junhyuk Oh · Hado van Hasselt · David Silver · Satinder Singh -
2019 Oral: Geometry-Aware Neural Rendering »
Joshua Tobin · Wojciech Zaremba · Pieter Abbeel -
2019 Poster: No-Press Diplomacy: Modeling Multi-Agent Gameplay »
Philip Paquette · Yuchen Lu · Steven Bocco · Max Smith · Satya O.-G. · Jonathan K. Kummerfeld · Joelle Pineau · Satinder Singh · Aaron Courville -
2019 Poster: Compositional Plan Vectors »
Coline Devin · Daniel Geng · Pieter Abbeel · Trevor Darrell · Sergey Levine -
2019 Poster: On the Utility of Learning about Humans for Human-AI Coordination »
Micah Carroll · Rohin Shah · Mark Ho · Tom Griffiths · Sanjit Seshia · Pieter Abbeel · Anca Dragan -
2019 Poster: Compression with Flows via Local Bits-Back Coding »
Jonathan Ho · Evan Lohn · Pieter Abbeel -
2019 Poster: Guided Meta-Policy Search »
Russell Mendonca · Abhishek Gupta · Rosen Kralev · Pieter Abbeel · Sergey Levine · Chelsea Finn -
2019 Spotlight: Compression with Flows via Local Bits-Back Coding »
Jonathan Ho · Evan Lohn · Pieter Abbeel -
2019 Spotlight: Guided Meta-Policy Search »
Russell Mendonca · Abhishek Gupta · Rosen Kralev · Pieter Abbeel · Sergey Levine · Chelsea Finn -
2018 Workshop: Deep Reinforcement Learning »
Pieter Abbeel · David Silver · Satinder Singh · Joelle Pineau · Joshua Achiam · Rein Houthooft · Aravind Srinivas -
2018 Poster: Temporal Regularization for Markov Decision Process »
Pierre Thodoroff · Audrey Durand · Joelle Pineau · Doina Precup -
2018 Poster: On Learning Intrinsic Rewards for Policy Gradient Methods »
Zeyu Zheng · Junhyuk Oh · Satinder Singh -
2018 Poster: Meta-Reinforcement Learning of Structured Exploration Strategies »
Abhishek Gupta · Russell Mendonca · YuXuan Liu · Pieter Abbeel · Sergey Levine -
2018 Poster: Learning Plannable Representations with Causal InfoGAN »
Thanard Kurutach · Aviv Tamar · Ge Yang · Stuart Russell · Pieter Abbeel -
2018 Spotlight: Meta-Reinforcement Learning of Structured Exploration Strategies »
Abhishek Gupta · Russell Mendonca · YuXuan Liu · Pieter Abbeel · Sergey Levine -
2018 Invited Talk (Posner Lecture): Reproducible, Reusable, and Robust Reinforcement Learning »
Joelle Pineau -
2018 Poster: Completing State Representations using Spectral Learning »
Nan Jiang · Alex Kulesza · Satinder Singh -
2018 Poster: Evolved Policy Gradients »
Rein Houthooft · Yuhua Chen · Phillip Isola · Bradly Stadie · Filip Wolski · Jonathan Ho · Pieter Abbeel -
2018 Spotlight: Evolved Policy Gradients »
Rein Houthooft · Yuhua Chen · Phillip Isola · Bradly Stadie · Filip Wolski · Jonathan Ho · Pieter Abbeel -
2018 Poster: The Importance of Sampling in Meta-Reinforcement Learning »
Bradly Stadie · Ge Yang · Rein Houthooft · Peter Chen · Yan Duan · Yuhuai Wu · Pieter Abbeel · Ilya Sutskever -
2017 Symposium: Deep Reinforcement Learning »
Pieter Abbeel · Yan Duan · David Silver · Satinder Singh · Junhyuk Oh · Rein Houthooft -
2017 Poster: Repeated Inverse Reinforcement Learning »
Kareem Amin · Nan Jiang · Satinder Singh -
2017 Poster: Natural Value Approximators: Learning when to Trust Past Estimates »
Zhongwen Xu · Joseph Modayil · Hado van Hasselt · Andre Barreto · David Silver · Tom Schaul -
2017 Poster: #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning »
Haoran Tang · Rein Houthooft · Davis Foote · Adam Stooke · Xi Chen · Yan Duan · John Schulman · Filip De Turck · Pieter Abbeel -
2017 Poster: Successor Features for Transfer in Reinforcement Learning »
Andre Barreto · Will Dabney · Remi Munos · Jonathan Hunt · Tom Schaul · David Silver · Hado van Hasselt -
2017 Poster: A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning »
Marc Lanctot · Vinicius Zambaldi · Audrunas Gruslys · Angeliki Lazaridou · Karl Tuyls · Julien Perolat · David Silver · Thore Graepel -
2017 Poster: Imagination-Augmented Agents for Deep Reinforcement Learning »
Sébastien Racanière · Theophane Weber · David Reichert · Lars Buesing · Arthur Guez · Danilo Jimenez Rezende · Adrià Puigdomènech Badia · Oriol Vinyals · Nicolas Heess · Yujia Li · Razvan Pascanu · Peter Battaglia · Demis Hassabis · David Silver · Daan Wierstra -
2017 Poster: Inverse Reward Design »
Dylan Hadfield-Menell · Smitha Milli · Pieter Abbeel · Stuart J Russell · Anca Dragan -
2017 Spotlight: Successor Features for Transfer in Reinforcement Learning »
Andre Barreto · Will Dabney · Remi Munos · Jonathan Hunt · Tom Schaul · David Silver · Hado van Hasselt -
2017 Spotlight: Natural Value Approximators: Learning when to Trust Past Estimates »
Zhongwen Xu · Joseph Modayil · Hado van Hasselt · Andre Barreto · David Silver · Tom Schaul -
2017 Spotlight: Repeated Inverse Reinforcement Learning »
Kareem Amin · Nan Jiang · Satinder Singh -
2017 Oral: Inverse Reward Design »
Dylan Hadfield-Menell · Smitha Milli · Pieter Abbeel · Stuart J Russell · Anca Dragan -
2017 Oral: Imagination-Augmented Agents for Deep Reinforcement Learning »
Sébastien Racanière · Theophane Weber · David Reichert · Lars Buesing · Arthur Guez · Danilo Jimenez Rezende · Adrià Puigdomènech Badia · Oriol Vinyals · Nicolas Heess · Yujia Li · Razvan Pascanu · Peter Battaglia · Demis Hassabis · David Silver · Daan Wierstra -
2017 Invited Talk: Deep Learning for Robotics »
Pieter Abbeel -
2017 Demonstration: A Deep Reinforcement Learning Chatbot »
Iulian Vlad Serban · Chinnadhurai Sankar · Mathieu Germain · Saizheng Zhang · Zhouhan Lin · Sandeep Subramanian · Taesup Kim · Michael Pieper · Sarath Chandar Anbil Parthipan · Nan Rosemary Ke · Sai Rajeswar Mudumba · Alexandre de Brébisson · Jose Sotelo · Dendi A Suhubdy · Vincent Michalski · Joelle Pineau · Yoshua Bengio -
2017 Demonstration: Deep Robotic Learning using Visual Imagination and Meta-Learning »
Chelsea Finn · Frederik Ebert · Tianhe Yu · Annie Xie · Sudeep Dasari · Pieter Abbeel · Sergey Levine -
2017 Poster: One-Shot Imitation Learning »
Yan Duan · Marcin Andrychowicz · Bradly Stadie · Jonathan Ho · Jonas Schneider · Ilya Sutskever · Pieter Abbeel · Wojciech Zaremba -
2017 Poster: Multitask Spectral Learning of Weighted Automata »
Guillaume Rabusseau · Borja Balle · Joelle Pineau -
2017 Poster: Value Prediction Network »
Junhyuk Oh · Satinder Singh · Honglak Lee -
2016 Workshop: Deep Reinforcement Learning »
David Silver · Satinder Singh · Pieter Abbeel · Peter Chen -
2016 Poster: Learning values across many orders of magnitude »
Hado van Hasselt · Arthur Guez · Matteo Hessel · Volodymyr Mnih · David Silver -
2016 Poster: Backprop KF: Learning Discriminative Deterministic State Estimators »
Tuomas Haarnoja · Anurag Ajay · Sergey Levine · Pieter Abbeel -
2016 Poster: Learning to Poke by Poking: Experiential Learning of Intuitive Physics »
Pulkit Agrawal · Ashvin Nair · Pieter Abbeel · Jitendra Malik · Sergey Levine -
2016 Oral: Learning to Poke by Poking: Experiential Learning of Intuitive Physics »
Pulkit Agrawal · Ashvin Nair · Pieter Abbeel · Jitendra Malik · Sergey Levine -
2016 Poster: Combinatorial Energy Learning for Image Segmentation »
Jeremy Maitin-Shepard · Viren Jain · Michal Januszewski · Peter Li · Pieter Abbeel -
2016 Poster: InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets »
Xi Chen · Peter Chen · Yan Duan · Rein Houthooft · John Schulman · Ilya Sutskever · Pieter Abbeel -
2016 Poster: VIME: Variational Information Maximizing Exploration »
Rein Houthooft · Xi Chen · Peter Chen · Yan Duan · John Schulman · Filip De Turck · Pieter Abbeel -
2016 Poster: Value Iteration Networks »
Aviv Tamar · Sergey Levine · Pieter Abbeel · Yi Wu · Garrett Thomas -
2016 Oral: Value Iteration Networks »
Aviv Tamar · Sergey Levine · Pieter Abbeel · Yi Wu · Garrett Thomas -
2016 Poster: Cooperative Inverse Reinforcement Learning »
Dylan Hadfield-Menell · Stuart J Russell · Pieter Abbeel · Anca Dragan -
2016 Tutorial: Deep Reinforcement Learning Through Policy Optimization »
Pieter Abbeel · John Schulman -
2015 Workshop: Deep Reinforcement Learning »
Pieter Abbeel · John Schulman · Satinder Singh · David Silver -
2015 Poster: Gradient Estimation Using Stochastic Computation Graphs »
John Schulman · Nicolas Heess · Theophane Weber · Pieter Abbeel -
2015 Poster: Action-Conditional Video Prediction using Deep Networks in Atari Games »
Junhyuk Oh · Xiaoxiao Guo · Honglak Lee · Richard L Lewis · Satinder Singh -
2015 Spotlight: Action-Conditional Video Prediction using Deep Networks in Atari Games »
Junhyuk Oh · Xiaoxiao Guo · Honglak Lee · Richard L Lewis · Satinder Singh -
2015 Poster: Learning Continuous Control Policies by Stochastic Value Gradients »
Nicolas Heess · Gregory Wayne · David Silver · Timothy Lillicrap · Tom Erez · Yuval Tassa -
2014 Workshop: Novel Trends and Applications in Reinforcement Learning »
Csaba Szepesvari · Marc Deisenroth (he/him) · Sergey Levine · Pedro Ortega · Brian Ziebart · Emma Brunskill · Naftali Tishby · Gerhard Neumann · Daniel Lee · Sridhar Mahadevan · Pieter Abbeel · David Silver · Vicenç Gómez -
2014 Workshop: From Bad Models to Good Policies (Sequential Decision Making under Uncertainty) »
Odalric-Ambrym Maillard · Timothy A Mann · Shie Mannor · Jeremie Mary · Laurent Orseau · Thomas Dietterich · Ronald Ortner · Peter Grünwald · Joelle Pineau · Raphael Fonteneau · Georgios Theocharous · Esteban D Arcaute · Christos Dimitrakakis · Nan Jiang · Doina Precup · Pierre-Luc Bacon · Marek Petrik · Aviv Tamar -
2014 Workshop: Autonomously Learning Robots »
Gerhard Neumann · Joelle Pineau · Peter Auer · Marc Toussaint -
2014 Poster: Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics »
Sergey Levine · Pieter Abbeel -
2014 Demonstration: SmartWheeler – A smart robotic wheelchair platform »
Martin Gerdzhev · Joelle Pineau · Angus Leigh · Andrew Sutcliffe -
2014 Spotlight: Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics »
Sergey Levine · Pieter Abbeel -
2014 Poster: Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning »
Xiaoxiao Guo · Satinder Singh · Honglak Lee · Richard L Lewis · Xiaoshi Wang -
2013 Poster: Reward Mapping for Transfer in Long-Lived Agents »
Xiaoxiao Guo · Satinder Singh · Richard L Lewis -
2013 Poster: Learning from Limited Demonstrations »
Beomjoon Kim · Amir-massoud Farahmand · Joelle Pineau · Doina Precup -
2013 Poster: Bellman Error Based Feature Generation using Random Projections on Sparse Spaces »
Mahdi Milani Fard · Yuri Grinberg · Amir-massoud Farahmand · Joelle Pineau · Doina Precup -
2013 Spotlight: Learning from Limited Demonstrations »
Beomjoon Kim · Amir-massoud Farahmand · Joelle Pineau · Doina Precup -
2013 Session: Oral Session 9 »
Satinder Singh -
2012 Poster: Near Optimal Chernoff Bounds for Markov Decision Processes »
Teodor Mihai Moldovan · Pieter Abbeel -
2012 Spotlight: Near Optimal Chernoff Bounds for Markov Decision Processes »
Teodor Mihai Moldovan · Pieter Abbeel -
2012 Poster: On-line Reinforcement Learning Using Incremental Kernel-Based Stochastic Factorization »
Andre S Barreto · Doina Precup · Joelle Pineau -
2011 Session: Oral Session 10 »
Joelle Pineau -
2011 Poster: Reinforcement Learning using Kernel-Based Stochastic Factorization »
Andre S Barreto · Doina Precup · Joelle Pineau -
2010 Workshop: Learning and Planning from Batch Time Series Data »
Daniel Lizotte · Michael Bowling · Susan Murphy · Joelle Pineau · Sandeep Vijan -
2010 Spotlight: On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient »
Jie Tang · Pieter Abbeel -
2010 Poster: On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient »
Jie Tang · Pieter Abbeel -
2010 Poster: PAC-Bayesian Model Selection for Reinforcement Learning »
Mahdi Milani Fard · Joelle Pineau -
2010 Poster: Reward Design via Online Gradient Ascent »
Jonathan D Sorg · Satinder Singh · Richard L Lewis -
2009 Poster: Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability »
Keith Bush · Joelle Pineau -
2008 Poster: Simple Local Models for Complex Dynamical Systems »
Erik Talvitie · Satinder Singh -
2008 Oral: Simple Local Models for Complex Dynamical Systems »
Erik Talvitie · Satinder Singh -
2008 Poster: MDPs with Non-Deterministic Policies »
Mahdi Milani Fard · Joelle Pineau -
2007 Oral: Exponential Family Predictive Representations of State »
David Wingate · Satinder Singh -
2007 Poster: Exponential Family Predictive Representations of State »
David Wingate · Satinder Singh -
2007 Spotlight: Bayes-Adaptive POMDPs »
Stephane Ross · Brahim Chaib-draa · Joelle Pineau -
2007 Spotlight: Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion »
J. Zico Kolter · Pieter Abbeel · Andrew Y Ng -
2007 Poster: Bayes-Adaptive POMDPs »
Stephane Ross · Brahim Chaib-draa · Joelle Pineau -
2007 Poster: Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion »
J. Zico Kolter · Pieter Abbeel · Andrew Y Ng -
2007 Poster: Theoretical Analysis of Heuristic Search Methods for Online POMDPs »
Stephane Ross · Joelle Pineau · Brahim Chaib-draa -
2006 Poster: Max-margin classification of incomplete data »
Gal Chechik · Geremy Heitz · Gal Elidan · Pieter Abbeel · Daphne Koller -
2006 Spotlight: Max-margin classification of incomplete data »
Gal Chechik · Geremy Heitz · Gal Elidan · Pieter Abbeel · Daphne Koller -
2006 Poster: An Application of Reinforcement Learning to Aerobatic Helicopter Flight »
Pieter Abbeel · Adam P Coates · Andrew Y Ng · Morgan Quigley -
2006 Talk: An Application of Reinforcement Learning to Aerobatic Helicopter Flight »
Pieter Abbeel · Adam P Coates · Andrew Y Ng · Morgan Quigley