

Learning convex bounds for linear quadratic control policy synthesis

Jack Umenberger · Thomas Schön

Room 517 AB #166

Keywords: [ Convex Optimization ] [ Decision and Control ]


Learning to make decisions from observed data in dynamic environments remains a problem of fundamental importance in a number of fields, from artificial intelligence and robotics to medicine and finance. This paper concerns the problem of learning control policies for unknown linear dynamical systems so as to maximize a quadratic reward function. We present a method to optimize the expected value of the reward over the posterior distribution of the unknown system parameters, given data. The algorithm involves sequential convex programming, and enjoys reliable local convergence and robust stability guarantees. Numerical simulations and stabilization of a real-world inverted pendulum are used to demonstrate the approach, with strong performance and robustness properties observed in both.
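The core idea of optimizing expected reward over a posterior can be illustrated with a minimal sketch, assuming a simple sampled approximation: draw system matrices (A, B) from a parameter distribution, roll out a fixed linear feedback policy u = Kx on each sample, and average the resulting quadratic costs. This is not the paper's algorithm (which uses sequential convex programming); the Gaussian "posterior" and all names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_cost(A, B, K, Q, R, x0, horizon=50):
    """Finite-horizon quadratic cost of u = K x on x_{t+1} = A x_t + B u_t."""
    x, cost = x0.copy(), 0.0
    for _ in range(horizon):
        u = K @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return cost

def expected_cost(A_mean, B_mean, K, Q, R, x0, n_samples=200, scale=0.01):
    """Monte Carlo average of the cost over sampled parameters (A, B).

    Samples are drawn from an assumed Gaussian centered on the mean
    parameters, standing in for a posterior given data."""
    costs = []
    for _ in range(n_samples):
        A = A_mean + scale * rng.standard_normal(A_mean.shape)
        B = B_mean + scale * rng.standard_normal(B_mean.shape)
        costs.append(quadratic_cost(A, B, K, Q, R, x0))
    return float(np.mean(costs))

# Toy 2-state double-integrator-like example (all values are assumptions).
A_mean = np.array([[1.0, 0.1], [0.0, 1.0]])
B_mean = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
K = np.array([[-3.0, -4.0]])   # hand-picked stabilizing gain (assumption)
x0 = np.array([1.0, 0.0])

J = expected_cost(A_mean, B_mean, K, Q, R, x0)
print(J)
```

A policy synthesis method would then search over K to minimize this expectation; the paper's contribution is doing so with convex bounds rather than direct Monte Carlo optimization.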
