

Poster

Diffusion Representation for Reinforcement Learning

Dmitry Shribak · Chen-Xiao Gao · Yitong Li · Chenjun Xiao · Bo Dai

West Ballroom A-D #6902
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Diffusion-based models have achieved notable empirical successes in reinforcement learning (RL) due to their expressiveness in modeling complex distributions. Although existing methods are promising, the key challenge in extending them to broader real-world applications lies in the computational cost at inference time: sampling from a diffusion model is considerably slow, often requiring tens to hundreds of iterations to generate even one sample. To circumvent this issue, we propose to leverage the flexibility of diffusion models for RL from a representation learning perspective. In particular, by exploiting the connection between diffusion models and energy-based models, we develop Diff-Rep, a coherent algorithmic framework for extracting sufficient representations for value functions in Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). We further demonstrate how Diff-Rep facilitates efficient policy optimization and practical algorithms while explicitly bypassing the difficulty and inference cost of sampling from the diffusion model. Finally, we provide comprehensive empirical studies verifying that Diff-Rep delivers robust and advantageous performance across various benchmarks in both fully and partially observable settings.
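As a rough illustration of the diffusion/energy-based-model connection the abstract alludes to (standard background, not the poster's specific construction, with notation assumed here): the score network of a diffusion model approximates the gradient of the log marginal density, and writing that density in energy-based form identifies the score with a negative energy gradient, so the learned energy (or the features that parameterize it) is available directly, without running the reverse sampling chain.

% Illustrative sketch only; s_\theta, E_\theta, and p_t are assumed notation.
\[
s_\theta(x, t) \,\approx\, \nabla_x \log p_t(x),
\qquad
p_t(x) \,\propto\, \exp\!\big(-E_\theta(x, t)\big)
\;\;\Longrightarrow\;\;
s_\theta(x, t) \,=\, -\nabla_x E_\theta(x, t).
\]

Under this view, a representation for value functions can be read off from the energy parameterization rather than from samples, which is consistent with the abstract's claim of bypassing diffusion sampling at inference time.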
