

Poster

Gradient Informed Proximal Policy Optimization

Sanghyun Son · Laura Zheng · Ryan Sullivan · Yi-Ling Qiao · Ming Lin

Great Hall & Hall B1+B2 (level 1) #1312
[ Paper ] [ Slides ] [ Poster ] [ OpenReview ]
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

We introduce a novel policy learning method that integrates analytical gradients from differentiable environments with the Proximal Policy Optimization (PPO) algorithm. To incorporate analytical gradients into the PPO framework, we introduce the concept of an α-policy, which serves as a locally superior policy. By adaptively modifying the α value, we can effectively manage the influence of analytical policy gradients during learning. To this end, we propose metrics for assessing the variance and bias of analytical gradients, reducing dependence on these gradients when high variance or bias is detected. Our proposed approach outperforms baseline algorithms in various scenarios, such as function optimization, physics simulations, and traffic control environments. Our code can be found online: https://github.com/SonSang/gippo.
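For intuition only, here is a minimal, hypothetical sketch of the two ideas the abstract describes: blending an analytical (differentiable-simulation) gradient with the PPO surrogate gradient via a coefficient α, and adapting α from an estimate of the analytical gradients' reliability. This is not the authors' implementation (see the linked repository); the function names, the `var_threshold` parameter, and the plain variance test standing in for the paper's variance and bias metrics are all assumptions made for illustration.

```python
import torch

def blended_policy_gradient(ppo_grad: torch.Tensor,
                            analytical_grad: torch.Tensor,
                            alpha: float) -> torch.Tensor:
    """Interpolate between the PPO surrogate gradient and the
    analytical gradient; alpha controls the analytical influence."""
    return (1.0 - alpha) * ppo_grad + alpha * analytical_grad

def adapt_alpha(analytical_grads: torch.Tensor,
                alpha: float,
                var_threshold: float = 1.0,
                step: float = 0.1) -> float:
    """Shrink alpha when sampled analytical gradients show high
    variance (a crude proxy for unreliability), grow it otherwise."""
    variance = analytical_grads.var(dim=0).mean().item()
    if variance > var_threshold:
        return max(0.0, alpha - step)
    return min(1.0, alpha + step)

# Example: one blended update step with 8 sampled analytical
# gradients of a 4-dimensional policy parameter vector.
grads = torch.randn(8, 4)
alpha = adapt_alpha(grads, alpha=0.5)
g = blended_policy_gradient(torch.randn(4), grads.mean(dim=0), alpha)
```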
