

Poster

An operator view of policy gradient methods

Dibya Ghosh · Marlos C. Machado · Nicolas Le Roux

Poster Session 1 #196

Abstract: We cast policy gradient methods as the repeated application of two operators: a policy improvement operator I, which maps any policy π to a better one Iπ, and a projection operator P, which finds the best approximation of Iπ in the set of realizable policies. We use this framework to introduce operator-based versions of well-known policy gradient methods such as REINFORCE and PPO, which leads to a better understanding of their original counterparts. We also use the resulting understanding of the roles of I and P to propose a new global lower bound on the expected return. This new perspective allows us to further bridge the gap between policy-based and value-based methods, showing how REINFORCE and the Bellman optimality operator, for example, can be seen as two sides of the same coin.
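To make the two-operator view concrete, below is a minimal sketch of the iteration on a toy single-state problem. The specific choices here — reweighting the policy by its Q-values for I, and a KL projection onto softmax policies for P — are illustrative assumptions for this sketch, not necessarily the exact operators used in the paper.

```python
import numpy as np

# Toy single-state "bandit" with known positive rewards, so Q^pi(a) = r(a).
# The realizable set is the softmax family pi_theta(a) ∝ exp(theta_a).
rewards = np.array([1.0, 2.0, 5.0])   # Q-values (assumed positive)
theta = np.zeros(3)                    # softmax parameters

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def improve(pi, q):
    """Illustrative improvement operator I: reweight pi by its Q-values.
    (One possible choice; hypothetical, not the paper's exact operator.)"""
    target = pi * q
    return target / target.sum()

def project(target, theta, lr=0.5, steps=200):
    """Illustrative projection operator P: find the softmax policy closest to
    `target` in KL divergence, by gradient descent on KL(target || pi_theta)."""
    for _ in range(steps):
        pi = softmax(theta)
        # gradient of KL(target || pi_theta) w.r.t. theta is (pi - target)
        theta = theta - lr * (pi - target)
    return theta

# Repeatedly apply I then P, as in the operator view of policy gradients.
for it in range(10):
    pi = softmax(theta)
    target = improve(pi, rewards)   # I: a better (non-parametric) policy
    theta = project(target, theta)  # P: back to the realizable set
    print(it, np.round(softmax(theta), 3), "expected return:", round(float(pi @ rewards), 3))
```

Running the loop, the projected policy concentrates on the highest-reward action and the expected return increases monotonically across iterations, which is the behavior the improvement-then-projection decomposition is meant to capture.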
