Poster

Generative Adversarial Imitation Learning

Jonathan Ho · Stefano Ermon

Area 5+6+7+8 #40

Keywords: [ Reinforcement Learning Algorithms ] [ (Other) Robotics and Control ] [ Deep Learning or Neural Networks ]


Abstract:

Consider learning a policy from example expert behavior, without interaction with the expert or access to a reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.
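To make the GAN analogy concrete: the instantiation the abstract refers to trains a discriminator to tell apart expert state-action pairs from the learner's, while the policy is updated using the discriminator's confusion as a reward signal. Below is a minimal runnable sketch of that loop in PyTorch. Everything here is an illustrative assumption rather than the authors' setup: a synthetic one-step task with a linear "expert", small networks, the common GAN sign convention (expert labeled 1), and a plain REINFORCE update with a mean baseline standing in for the trust-region policy step used in the paper.

```python
# Minimal GAIL-style loop: the toy task, network sizes, and hyperparameters
# are illustrative assumptions, not the paper's experimental setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
state_dim, action_dim = 4, 2

# Discriminator D(s, a): outputs a logit; trained to label expert pairs 1,
# policy pairs 0.
disc = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                     nn.Linear(64, 1))
# Gaussian policy pi(a | s) with unit variance; only the mean is learned.
policy_mean = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                            nn.Linear(64, action_dim))

d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
pi_opt = torch.optim.Adam(policy_mean.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Synthetic "expert": a fixed linear mapping from states to actions.
expert_W = torch.randn(state_dim, action_dim)

def expert_batch(n=128):
    s = torch.randn(n, state_dim)
    return s, s @ expert_W

for step in range(500):
    # Roll out the current policy on fresh states (one-step "episodes" here).
    s_pi = torch.randn(128, state_dim)
    with torch.no_grad():
        a_pi = torch.distributions.Normal(policy_mean(s_pi), 1.0).sample()
    s_e, a_e = expert_batch()

    # Discriminator step: push expert pairs toward 1, policy pairs toward 0.
    d_e = disc(torch.cat([s_e, a_e], dim=1))
    d_p = disc(torch.cat([s_pi, a_pi], dim=1))
    d_loss = bce(d_e, torch.ones_like(d_e)) + bce(d_p, torch.zeros_like(d_p))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Policy step: REINFORCE with reward log D(s, a), so the policy is
    # rewarded when the discriminator mistakes its pairs for expert ones.
    with torch.no_grad():
        r = torch.log(torch.sigmoid(disc(torch.cat([s_pi, a_pi], dim=1)))
                      + 1e-8).squeeze(-1)
        r = r - r.mean()  # mean baseline to reduce gradient variance
    log_prob = torch.distributions.Normal(policy_mean(s_pi),
                                          1.0).log_prob(a_pi).sum(dim=1)
    pi_loss = -(log_prob * r).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
```

Because the policy never sees a hand-designed reinforcement signal, only the discriminator's judgment of its state-action pairs, this loop matches the abstract's framing of extracting a policy directly from demonstrations without the intermediate cost-function recovery step.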
