Spotlight
Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation
Yuhuai Wu · Elman Mansimov · Roger Grosse · Shun Liao · Jimmy Ba

Wed Dec 06 03:30 PM -- 03:35 PM (PST) @ Hall A

In this work we propose to apply second-order optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region, hence naming our method Actor Critic using Kronecker-factored Trust Region (ACKTR). We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed method, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency. To the best of our knowledge, we are also the first to succeed in training several nontrivial tasks in the MuJoCo environment directly from image (rather than state-space) observations.
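The core idea above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a toy NumPy illustration, under assumed conventions, of a K-FAC-style update for a single linear layer: the Fisher matrix is approximated as a Kronecker product of the input-activation covariance A and the output-gradient covariance G, so the natural gradient reduces to two small matrix solves, and the step size is rescaled to respect a trust-region bound on the estimated KL divergence. All shapes, the damping value, and the trust-region radius `delta` are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear policy layer: outputs = inputs @ W (shapes are illustrative).
in_dim, out_dim, batch = 4, 3, 256
W = rng.normal(size=(in_dim, out_dim))

# Assumed batch of layer inputs and back-propagated output gradients.
a = rng.normal(size=(batch, in_dim))   # layer input activations
g = rng.normal(size=(batch, out_dim))  # gradients w.r.t. layer outputs

grad_W = a.T @ g / batch               # ordinary (first-order) gradient

# K-FAC: approximate this layer's Fisher as a Kronecker product of
# A = E[a aᵀ] and G = E[g gᵀ]; damping keeps the factors invertible.
damping = 1e-2
A = a.T @ a / batch + damping * np.eye(in_dim)
G = g.T @ g / batch + damping * np.eye(out_dim)

# Natural gradient: inverting A ⊗ G reduces to two small solves,
# nat_grad = A⁻¹ · grad_W · G⁻¹, instead of one huge matrix inverse.
nat_grad = np.linalg.solve(A, grad_W) @ np.linalg.inv(G)

# Trust region: shrink the learning rate so the quadratic estimate of
# the KL change, ½ η² vec(Δ)ᵀ F vec(Δ), stays below delta.
delta, lr_max = 1e-3, 0.25
quad = np.sum(nat_grad * (A @ nat_grad @ G))  # vec(Δ)ᵀ(A ⊗ G)vec(Δ)
lr = min(lr_max, float(np.sqrt(2 * delta / (quad + 1e-12))))

W = W - lr * nat_grad  # one trust-region-limited natural-gradient step
```

The key computational point is that the Kronecker structure lets the update cost scale with the layer's input and output dimensions rather than with the full parameter count, which is what makes the second-order update tractable for deep networks.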

Author Information

Yuhuai Wu (University of Toronto)
Elman Mansimov (New York University)
Roger Grosse (University of Toronto)
Shun Liao (University of Toronto)
Jimmy Ba (University of Toronto / Vector Institute)
