Tutorial
Deep Reinforcement Learning Through Policy Optimization
Pieter Abbeel · John Schulman

Sun Dec 04 11:30 PM -- 01:30 AM (PST) @ Rooms 211 + 212

Deep Reinforcement Learning (Deep RL) has seen several breakthroughs in recent years. In this tutorial we will focus on recent advances in Deep RL through policy gradient methods and actor-critic methods. These methods have shown significant success in a wide range of domains, including continuous-action domains such as manipulation, locomotion, and flight. They have also achieved state-of-the-art results in discrete-action domains such as Atari. Fundamentally, there are two types of gradient calculations: likelihood ratio gradients (also known as score function gradients) and path derivative gradients (also known as perturbation analysis gradients). We will teach policy gradient methods of each type, connect them with actor-critic methods (which learn both a value function and a policy), and cover a generalized view of the computation of gradients of expectations through Stochastic Computation Graphs.
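To make the distinction concrete (this summary is ours, not part of the tutorial abstract), the two estimators of the gradient of an expected objective, for a parameterized distribution p_theta and objective f, are usually written as:

\[
\nabla_\theta\, \mathbb{E}_{x \sim p_\theta}\big[f(x)\big]
  \;=\; \mathbb{E}_{x \sim p_\theta}\big[f(x)\,\nabla_\theta \log p_\theta(x)\big]
  \qquad \text{(likelihood ratio / score function)}
\]
\[
\nabla_\theta\, \mathbb{E}_{\epsilon \sim q}\big[f(g_\theta(\epsilon))\big]
  \;=\; \mathbb{E}_{\epsilon \sim q}\big[\nabla_\theta f(g_\theta(\epsilon))\big]
  \qquad \text{(path derivative, assuming } x = g_\theta(\epsilon)\text{)}
\]

The first requires only the ability to sample x and differentiate log p_theta; the second additionally requires f to be differentiable and x to be reparameterizable as g_theta(epsilon).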

Learning Objectives:
The objective is to provide attendees with a solid understanding of the foundations of, as well as recent advances in, policy gradient methods and actor-critic methods. Approaches that will be taught: Likelihood Ratio Policy Gradient (REINFORCE), Natural Policy Gradient, Trust Region Policy Optimization (TRPO), Generalized Advantage Estimation (GAE), Asynchronous Advantage Actor Critic (A3C), Path Derivative Policy Gradients, (Deep) Deterministic Policy Gradient (DDPG), Stochastic Value Gradients (SVG), and Guided Policy Search (GPS), as well as a generalized view of the computation of gradients of expectations through Stochastic Computation Graphs.
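For attendees who want to see the simplest of these methods in code, below is a minimal NumPy sketch of the likelihood ratio policy gradient (REINFORCE) with a tabular softmax policy. It is an illustration only, not code from the tutorial; the environment object env, its reset()/step() interface, and the (n_states, n_actions) parameter array theta are hypothetical placeholders.

    # Minimal REINFORCE sketch (illustration only, not tutorial code).
    # Assumes a small discrete environment exposing reset() -> state and
    # step(action) -> (next_state, reward, done), and a parameter array
    # theta of shape (n_states, n_actions) defining a tabular softmax policy.
    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def run_episode(env, theta, gamma=0.99):
        """Roll out one episode; return states, actions, and returns-to-go."""
        states, actions, rewards = [], [], []
        s, done = env.reset(), False
        while not done:
            probs = softmax(theta[s])                  # pi(. | s)
            a = np.random.choice(len(probs), p=probs)
            s_next, r, done = env.step(a)
            states.append(s); actions.append(a); rewards.append(r)
            s = s_next
        # discounted returns-to-go: G_t = sum_{k >= t} gamma^(k-t) r_k
        G, returns = 0.0, []
        for r in reversed(rewards):
            G = r + gamma * G
            returns.append(G)
        return states, actions, list(reversed(returns))

    def reinforce_update(env, theta, lr=0.01):
        """One likelihood ratio gradient step: grad log pi(a|s) * (G_t - baseline)."""
        states, actions, returns = run_episode(env, theta)
        baseline = np.mean(returns)                    # constant baseline reduces variance
        for s, a, G in zip(states, actions, returns):
            probs = softmax(theta[s])
            grad_logp = -probs                         # d log softmax(theta[s])[a] / d theta[s]
            grad_logp[a] += 1.0                        # = one_hot(a) - pi(. | s)
            theta[s] += lr * (G - baseline) * grad_logp
        return theta

Methods covered later in the tutorial, such as TRPO and A3C, replace this plain gradient step with trust-region updates and learned value-function baselines (actor-critic), but the underlying likelihood ratio estimator is the same.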

Target Audience: Machine learning researchers. RL background not assumed, but some prior familiarity with the basic concepts could be helpful. Good resource: Sutton and Barto Chapters 3 & 4 (http://webdocs.cs.ualberta.ca/~sutton/book/the-book.html).

Author Information

Pieter Abbeel (UC Berkeley & Covariant)

Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of the AI@TheHouse venture fund, and Advisor to many AI/robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on how robots can learn from people (apprenticeship learning), how robots can learn through their own trial and error (reinforcement learning), and how to speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA; early career awards from NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE; and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.

John Schulman (UC Berkeley)

John is a research scientist at OpenAI. Previously he was in the computer science PhD program at UC Berkeley, and before that he studied physics at Caltech. His research focuses on reinforcement learning, where he strives to develop systems that can match the impressive skills of mammals and birds for locomotion, navigation, and manipulation; and he is especially interested in applications in robotics. He previously performed research in (and is still interested in) neuroscience. Outside of work, he enjoys reading, running, and listening to jazz music.
