
Poster

Policy gradients in linearly-solvable MDPs

Emanuel Todorov


Abstract:

We present policy gradient results within the framework of linearly-solvable MDPs. For the first time, compatible function approximators and natural policy gradients are obtained by estimating the cost-to-go function, rather than the (much larger) state-action advantage function as is necessary in traditional MDPs. We also develop the first compatible function approximators and natural policy gradients for continuous-time stochastic systems.
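To make the setting concrete, below is a minimal sketch of the linearly-solvable MDP framework the abstract builds on: the policy is induced by a parameterized cost-to-go estimate v_theta through the desirability z = exp(-v_theta), and the average cost (state cost plus KL control cost relative to the passive dynamics) is reduced by gradient steps. This is not the paper's compatible function approximation or natural gradient estimator; the toy costs, tabular features, and finite-difference gradient are assumptions made for illustration only.

```python
import numpy as np

# Sketch of a small linearly-solvable MDP (toy values, not from the paper).
rng = np.random.default_rng(0)
n = 5
q = rng.uniform(0.0, 1.0, size=n)          # state costs q(x)
P = rng.uniform(size=(n, n))
P /= P.sum(axis=1, keepdims=True)          # passive dynamics p(x'|x)
Phi = np.eye(n)                            # tabular features for v_theta

def policy(theta):
    """Controlled dynamics u_theta(x'|x) proportional to p(x'|x) exp(-v_theta(x'))."""
    z = np.exp(-Phi @ theta)               # desirability z = exp(-v_theta)
    U = P * z[None, :]
    return U / U.sum(axis=1, keepdims=True)

def avg_cost(theta):
    """Exact average cost under the stationary distribution of u_theta."""
    U = policy(theta)
    evals, evecs = np.linalg.eig(U.T)      # stationary distribution: pi U = pi
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    kl = np.sum(U * np.log(U / P), axis=1) # per-state KL control cost
    return float(pi @ (q + kl))

# Crude finite-difference gradient descent on the average cost
# (standing in for the paper's policy gradient estimator).
theta = np.zeros(n)
eps, lr = 1e-4, 0.5
for it in range(60):
    grad = np.array([
        (avg_cost(theta + eps * np.eye(n)[i]) - avg_cost(theta - eps * np.eye(n)[i]))
        / (2 * eps)
        for i in range(n)
    ])
    theta -= lr * grad
    if it % 20 == 0:
        print(f"iter {it:2d}  avg cost = {avg_cost(theta):.4f}")
print(f"final avg cost = {avg_cost(theta):.4f}")
```

Because the policy is fully determined by the cost-to-go parameters theta, improving the policy amounts to improving the cost-to-go estimate, which is the structural feature the abstract exploits.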
