
Probabilistic Approaches for Control and Robotics
Marc Deisenroth · Hilbert J Kappen · Emo Todorov · Duy Nguyen-Tuong · Carl Edward Rasmussen · Jan Peters

Fri Dec 11 07:30 AM -- 06:30 PM (PST) @ Westin: Alpine BC
Event URL: http://mlg.eng.cam.ac.uk/marc/nipsWS09

During the last decade, many areas of Bayesian machine learning have reached a high level of maturity. This has resulted in a variety of theoretically sound and efficient algorithms for learning and inference in the presence of uncertainty. However, in the context of control, robotics, and reinforcement learning, uncertainty has not yet been treated with comparable rigor despite its central role in risk-sensitive control, sensorimotor control, robust control, and cautious control. A consistent treatment of uncertainty is also essential when dealing with stochastic policies, incomplete state information, and exploration strategies.

A typical situation where uncertainty comes into play is when the exact state-transition dynamics are unknown and only limited or no expert knowledge is available or affordable. One option is to learn a model from data. However, if the model is too far off, this approach can result in arbitrarily bad solutions. This model bias can be sidestepped by flexible model-free methods; their disadvantage is that they do not generalize and often make less efficient use of data, so they frequently need more trials than are feasible on a real-world system. A probabilistic model can make efficient use of data while alleviating model bias by explicitly representing and incorporating uncertainty.
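As a minimal sketch of the idea above, the following learns a probabilistic dynamics model from a small batch of transitions using Bayesian linear regression (a simpler stand-in for the Gaussian-process models often used in this line of work); the toy system, its parameters, and all variable names are illustrative assumptions, not part of the workshop material. The key point is that the learned model returns a predictive variance alongside each prediction, so downstream control can account for model uncertainty rather than trusting a single point estimate.

```python
import numpy as np

# Toy 1-D system (assumed for illustration): x' = a*x + b*u + noise.
rng = np.random.default_rng(0)
a_true, b_true, noise_std = 0.9, 0.5, 0.1

# Collect a small batch of transitions (the limited-data regime in the text).
X = rng.uniform(-1, 1, size=(20, 2))             # columns: state x, action u
y = X @ np.array([a_true, b_true]) + noise_std * rng.normal(size=20)

# Bayesian linear regression: Gaussian prior w ~ N(0, alpha^-1 I), known noise.
alpha, beta = 1.0, 1.0 / noise_std**2            # prior and noise precision
S_inv = alpha * np.eye(2) + beta * X.T @ X       # posterior precision
S = np.linalg.inv(S_inv)                         # posterior covariance
m = beta * S @ X.T @ y                           # posterior mean of [a, b]

# Predictive distribution at a query (x, u): mean AND variance, so the
# model explicitly reports how uncertain its own predictions are.
x_query = np.array([0.5, -0.2])
pred_mean = x_query @ m
pred_var = 1.0 / beta + x_query @ S @ x_query    # noise + model uncertainty
```

Far from the training data, the `x_query @ S @ x_query` term grows, which is exactly the signal a cautious controller or exploration strategy can exploit.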

The use of probabilistic approaches requires (approximate) inference algorithms, where Bayesian machine learning can come into play. Although probabilistic modeling and inference conceptually fit into this context, they are not widespread in robotics, control, and reinforcement learning. Hence, this workshop aims to bring researchers together to discuss the need, the theoretical properties, and the practical implications of probabilistic methods in control, robotics, and reinforcement learning.

One particular focus will be on probabilistic reinforcement-learning approaches that profit from recent developments in optimal control, which show that the problem can be substantially simplified if certain structure is imposed. The simplifications include linearity of the (Hamilton-Jacobi) Bellman equation. The resulting duality with Bayesian estimation allows for analytical computation of the optimal control laws and closed-form expressions for the optimal value functions.
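The linearity mentioned above can be made concrete with a toy example in the linearly-solvable MDP setting: under the imposed structure, the exponentiated value function z(x) = exp(-V(x)) satisfies a *linear* fixed-point equation z = diag(exp(-q)) P z, where q is the state cost and P the passive dynamics. The chain, costs, and dynamics below are illustrative assumptions, a sketch of the structure rather than any specific published model.

```python
import numpy as np

# Toy first-exit problem on a 5-state chain; state 4 is terminal (cost 0).
n = 5
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])         # per-state costs
P = np.zeros((n, n))                             # passive random-walk dynamics
for i in range(4):
    P[i, max(i - 1, 0)] += 0.5
    P[i, i + 1] += 0.5
P[4, 4] = 1.0                                    # absorbing terminal state

# Linear Bellman equation for the desirability z = exp(-V):
#   z = G z,  G = diag(exp(-q)) @ P,  with z fixed to 1 at the terminal state.
G = np.diag(np.exp(-q)) @ P
z = np.ones(n)
for _ in range(1000):                            # fixed-point iteration
    z = G @ z
    z[4] = 1.0                                   # boundary condition exp(-0)=1

V = -np.log(z)                                   # optimal value function
# The optimal controlled transition probabilities then follow in closed form:
#   u*(x'|x) proportional to P(x'|x) * z(x')
```

Because the equation is linear in z, solving it reduces to linear algebra (here a simple fixed-point iteration), sidestepping the nonlinear minimization in the standard Bellman equation.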

Author Information

Marc Deisenroth (University College London)

Professor Marc Deisenroth is the DeepMind Chair in Artificial Intelligence at University College London and the Deputy Director of UCL's Centre for Artificial Intelligence. He also holds a visiting faculty position at the University of Johannesburg and Imperial College London. Marc's research interests center around data-efficient machine learning, probabilistic modeling and autonomous decision making. Marc was Program Chair of EWRL 2012, Workshops Chair of RSS 2013, EXPO-Co-Chair of ICML 2020, and Tutorials Co-Chair of NeurIPS 2021. In 2019, Marc co-organized the Machine Learning Summer School in London. He received Paper Awards at ICRA 2014, ICCAS 2016, and ICML 2020. He is co-author of the book [Mathematics for Machine Learning](https://mml-book.github.io) published by Cambridge University Press (2020).

Hilbert J Kappen (Radboud University)
Emo Todorov (University of Washington)
Duy Nguyen-Tuong (Bosch Research)
Carl Edward Rasmussen (University of Cambridge)
Jan Peters (TU Darmstadt & MPI Intelligent Systems)

Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt and at the same time a senior research scientist and group leader at the Max-Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group. Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the Robotics: Science & Systems - Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society's Early Career Award as well as numerous best paper awards. In 2015, he was awarded an ERC Starting Grant. Jan Peters has studied Computer Science, Electrical, Mechanical and Control Engineering at TU Munich and FernUni Hagen in Germany, at the National University of Singapore (NUS) and the University of Southern California (USC). He has received four Master's degrees in these disciplines as well as a Computer Science PhD from USC.
