Poster
in
Workshop: 4th Robot Learning Workshop: Self-Supervised and Lifelong Learning

Variational Inference MPC for Robot Motion with Normalizing Flows

Thomas Power · Dmitry Berenson


Abstract:

In this paper, we propose an MPC method for robot motion by formulating MPC as Bayesian inference. We use amortized variational inference to approximate the posterior with a normalizing flow conditioned on the start, goal, and environment. Representing the posterior with a normalizing flow allows us to model complex distributions, which is important in robotics, where real environments impose difficult constraints on trajectories. We also present an approach for generalizing the learned sampling distribution to novel environments outside the training distribution. We demonstrate that our approach generalizes to a difficult novel environment and outperforms a baseline sampling-based MPC method on a navigation problem.
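The core loop described above, sampling candidate trajectories from a conditional distribution and scoring them, can be sketched as follows. This is a minimal illustration, not the paper's method: the learned normalizing flow is stood in for by a fixed affine transform of a Gaussian base distribution (itself a trivial flow), the conditioning on start and goal is a hand-coded heuristic, and the dynamics and cost (`rollout_cost`, single-integrator dynamics) are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_flow_sample(start, goal, horizon, n_samples):
    # Stand-in for the amortized conditional normalizing flow:
    # push Gaussian base samples through an affine map whose shift
    # depends on the (start, goal) context. In the paper this map
    # is learned; here it simply steers samples toward the goal.
    direction = (goal - start) / horizon           # per-step nominal control
    z = rng.standard_normal((n_samples, horizon, start.shape[0]))
    scale = 0.1
    return direction + scale * z                   # candidate control sequences

def rollout_cost(start, goal, controls):
    # Roll out simple single-integrator dynamics and penalize
    # terminal distance to the goal plus control effort.
    states = start + np.cumsum(controls, axis=1)
    goal_cost = np.linalg.norm(states[:, -1] - goal, axis=1)
    effort = 0.01 * np.sum(controls ** 2, axis=(1, 2))
    return goal_cost + effort

# Sampling-based MPC step: draw candidates, score, keep the best.
start = np.zeros(2)
goal = np.array([1.0, 1.0])
U = conditional_flow_sample(start, goal, horizon=10, n_samples=256)
costs = rollout_cost(start, goal, U)
best = U[np.argmin(costs)]
```

In the paper's setting, replacing the hand-coded conditioner with a flow trained by variational inference is what lets the sampling distribution concentrate on low-cost, constraint-satisfying trajectories.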