
Joint Model-Policy Optimization of a Lower Bound for Model-Based RL
Benjamin Eysenbach · Alexander Khazatsky · Sergey Levine · Russ Salakhutdinov

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #928

Many model-based reinforcement learning (RL) methods follow a similar template: fit a model to previously observed data, and then use data from that model for RL or planning. However, models that achieve better training performance (e.g., lower MSE) are not necessarily better for control: an RL agent may seek out the small fraction of states where an accurate model makes mistakes, or it might act in ways that do not expose the errors of an inaccurate model. As noted in prior work, there is an objective mismatch: models are useful if they yield good policies, but they are trained to maximize their accuracy, rather than the performance of the policies that result from them. In this work, we propose a single objective for jointly training the model and the policy, such that updates to either component increase a lower bound on expected return. To the best of our knowledge, this is the first lower bound for model-based RL that holds globally and can be efficiently estimated in continuous settings; it is the only lower bound that mends the objective mismatch problem. A version of this bound becomes tight under certain assumptions. Optimizing this bound resembles a GAN: a classifier distinguishes between real and fake transitions, the model is updated to produce transitions that look realistic, and the policy is updated to avoid states where the model predictions are unrealistic. Numerical simulations demonstrate that optimizing this bound yields reward-maximizing policies and yields dynamics that (perhaps surprisingly) can aid in exploration. We also show that a deep RL algorithm loosely based on our lower bound can achieve performance competitive with prior model-based methods, and better performance on certain hard exploration tasks.
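The GAN-like structure described above can be illustrated with a minimal toy sketch (not the paper's algorithm): a logistic classifier learns to distinguish real transitions from model-generated ones, and the model's parameter is updated to fool the classifier. Here the "environment" is hypothetical 1-D linear dynamics s' = 0.8*s + noise, the model is a single coefficient theta, and the policy term is omitted for brevity; all names and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_COEF, NOISE = 0.8, 0.05  # hypothetical true dynamics: s' = 0.8*s + noise

def features(s, sp):
    # Quadratic features so a linear classifier can separate the two joint
    # distributions; the cross term s*sp carries the discriminative signal.
    return np.stack([s, sp, s * sp, s**2, sp**2, np.ones_like(s)], axis=1)

theta = 0.0       # learned dynamics model: s' ~ theta * s + noise
w = np.zeros(6)   # logistic classifier: real (label 1) vs. model (label 0)

for step in range(2000):
    s = rng.standard_normal(256)
    sp_real = TRUE_COEF * s + NOISE * rng.standard_normal(256)
    sp_fake = theta * s + NOISE * rng.standard_normal(256)

    # 1) Classifier step: logistic regression on real vs. model transitions.
    for sp, y in [(sp_real, 1.0), (sp_fake, 0.0)]:
        x = features(s, sp)
        p = 1.0 / (1.0 + np.exp(-x @ w))
        w += 0.05 * x.T @ (y - p) / len(s)

    # 2) Model step: ascend log c(s, s') through s' = theta*s + noise,
    #    so model transitions look "real" to the classifier.
    x = features(s, sp_fake)
    p = 1.0 / (1.0 + np.exp(-x @ w))
    dz_dtheta = (w[1] + w[2] * s + 2 * w[4] * sp_fake) * s
    theta += 0.01 * np.mean((1.0 - p) * dz_dtheta)

print(theta)  # drifts toward the true coefficient 0.8
```

In the full method sketched in the abstract, a third update would steer the policy away from states where the classifier flags the model's predictions as unrealistic.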

Author Information

Benjamin Eysenbach (CMU)

I'm a 5th-year PhD student at CMU, focusing on RL algorithms. I am currently on the faculty job market.

Alexander Khazatsky (Stanford University)
Sergey Levine (UC Berkeley)
Russ Salakhutdinov (Carnegie Mellon University)
