Poster
Learning non-Markovian Decision-Making from State-only Sequences
Aoyang Qin · Feng Gao · Qing Li · Song-Chun Zhu · Sirui Xie

Wed Dec 13 03:00 PM -- 05:00 PM (PST) @ Great Hall & Hall B1+B2 #1401
Conventional imitation learning assumes access to the actions of demonstrators, but these motor signals are often unobservable in naturalistic settings. Additionally, sequential decision-making behaviors in these settings can deviate from the assumptions of a standard Markov Decision Process (MDP). To address these challenges, we explore deep generative modeling of state-only sequences with a non-Markov Decision Process (nMDP), where the policy is an energy-based prior in the latent space of the state transition generator. We develop maximum likelihood estimation to achieve model-based imitation, which involves short-run MCMC sampling from the prior and importance sampling for the posterior. The learned model enables $\textit{decision-making as inference}$: model-free policy execution is equivalent to prior sampling, while model-based planning corresponds to posterior sampling initialized from the policy. We demonstrate the efficacy of the proposed method in a prototypical path-planning task with non-Markovian constraints and show that the learned model exhibits strong performance in challenging domains from the MuJoCo suite.
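The short-run MCMC sampling mentioned in the abstract typically refers to a fixed, small number of Langevin dynamics steps under the learned energy function. The sketch below illustrates that idea in PyTorch; it is a minimal illustration under assumptions, not the authors' implementation. The network architecture, latent dimension, step count, step size, and all names (`EnergyPrior`, `short_run_langevin`) are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code): short-run Langevin
# MCMC sampling from an energy-based prior over latent decisions.
import torch
import torch.nn as nn

class EnergyPrior(nn.Module):
    """Hypothetical energy function E_theta(z) over latent variables."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.SiLU(), nn.Linear(64, 1)
        )

    def forward(self, z):
        return self.net(z).squeeze(-1)  # one scalar energy per sample

def short_run_langevin(energy, z, n_steps=20, step_size=0.1):
    """K-step Langevin dynamics:
    z <- z - (s/2) * grad E(z) + sqrt(s) * noise, for a small fixed K."""
    z = z.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(energy(z).sum(), z)[0]
        z = (z - 0.5 * step_size * grad
             + step_size ** 0.5 * torch.randn_like(z))
        z = z.detach().requires_grad_(True)
    return z.detach()

# Model-free execution as prior sampling: initialize latents from noise
# and run short-run MCMC under the learned energy.
prior = EnergyPrior()
z0 = torch.randn(8, 16)                  # batch of initial latents
z_prior = short_run_langevin(prior, z0)  # approximate prior samples
```

In this reading, posterior sampling for planning would run the same dynamics under the posterior energy and, as the abstract notes, could be initialized from policy (prior) samples rather than from noise.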

Author Information

Aoyang Qin (Tsinghua University)
Feng Gao (UCLA)
Qing Li (UCLA)
Song-Chun Zhu (UCLA)
Sirui Xie (Google DeepMind, UCLA)
