Poster
Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs
Jianzhun Du · Joseph Futoma · Finale Doshi-Velez

Mon Dec 07 09:00 PM -- 11:00 PM (PST) @ Poster Session 0 #147

We present two elegant solutions for modeling continuous-time dynamics in a novel model-based reinforcement learning (RL) framework for semi-Markov decision processes (SMDPs), using neural ordinary differential equations (ODEs). Our models accurately characterize continuous-time dynamics and enable us to develop high-performing policies using a small amount of data. We also develop a model-based approach for optimizing time schedules to reduce interaction rates with the environment while maintaining near-optimal performance, which is not possible for model-free methods. We experimentally demonstrate the efficacy of our methods across various continuous-time domains.
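To make the core idea concrete, below is a minimal sketch (not the authors' code) of a neural ODE dynamics model for continuous-time RL: a learned vector field over the state is integrated for an arbitrary time interval to predict the next state, so transitions with variable durations, as in an SMDP, are handled naturally. It assumes PyTorch and the torchdiffeq package; all class, function, and parameter names are illustrative.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed dependency: pip install torchdiffeq


class ODEDynamics(nn.Module):
    """Learned vector field d(state)/dt, conditioned on a held-fixed action."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )
        self.action = None  # set before each integration

    def forward(self, t, state):
        # torchdiffeq calls forward(t, y); the action is held constant
        # over the integration interval, as in an SMDP transition.
        return self.net(torch.cat([state, self.action], dim=-1))


def predict_next_state(model: ODEDynamics, state, action, dt: float):
    """Integrate the learned ODE from time 0 to dt to predict the next state."""
    model.action = action
    t = torch.tensor([0.0, dt])
    trajectory = odeint(model, state, t)  # shape: (len(t), batch, state_dim)
    return trajectory[-1]


# Usage: one imagined transition with a variable-length time step.
model = ODEDynamics(state_dim=4, action_dim=2)
s = torch.randn(1, 4)
a = torch.randn(1, 2)
s_next = predict_next_state(model, s, a, dt=0.37)
```

Because the integration horizon dt is a free argument rather than a fixed step size, a model of this form can also be queried at candidate decision times, which is the kind of capability the paper's time-schedule optimization relies on.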

Author Information

Jianzhun Du (Harvard University)
Joseph Futoma (Harvard University)
Finale Doshi-Velez (Harvard University)