Globally Convergent Policy Search for Output Estimation
Jack Umenberger · Max Simchowitz · Juan Perdomo · Kaiqing Zhang · Russ Tedrake

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #724
We introduce the first direct policy search algorithm which provably converges to the globally optimal dynamic filter for the classical problem of predicting the outputs of a linear dynamical system, given noisy, partial observations. Despite the ubiquity of partial observability in practice, theoretical guarantees for direct policy search algorithms, one of the backbones of modern reinforcement learning, have proven difficult to achieve. This is primarily due to the degeneracies which arise when optimizing over filters that maintain an internal state. In this paper, we provide a new perspective on this challenging problem based on the notion of informativity, which intuitively requires that all components of a filter’s internal state are representative of the true state of the underlying dynamical system. We show that informativity overcomes the aforementioned degeneracy. Specifically, we propose a regularizer which explicitly enforces informativity, and establish that gradient descent on this regularized objective, combined with a “reconditioning step”, converges to the globally optimal cost at a $O(1/T)$ rate.
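The core loop described in the abstract, gradient descent on a cost plus an explicit regularizer, can be illustrated on a toy problem. The sketch below is not the paper's method: a simple ill-conditioned quadratic stands in for the output-prediction cost, a ridge penalty stands in for the informativity regularizer, and the reconditioning step is omitted. All names and constants here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned quadratic standing in for the filtering cost
# (the paper's actual cost is over dynamic filter parameters).
H = np.diag([10.0, 1.0, 0.1])
lam = 0.1    # weight on the stand-in regularizer (hypothetical)
step = 0.05  # step size; stable since step < 2 / (10 + lam)

def objective(theta):
    # cost + regularizer; the regularized problem is strongly convex,
    # mirroring the role informativity plays in removing degeneracy
    return 0.5 * theta @ H @ theta + 0.5 * lam * theta @ theta

def gradient(theta):
    return H @ theta + lam * theta

theta = rng.normal(size=3)
start = objective(theta)
for _ in range(2000):
    theta -= step * gradient(theta)
end = objective(theta)
print(start, end)  # objective decreases toward the global minimum at 0
```

The paper additionally interleaves a "reconditioning step" (not shown here) and proves an $O(1/T)$ rate to the globally optimal cost; this toy merely shows the shape of the regularized descent loop.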

Author Information

Jack Umenberger (Massachusetts Institute of Technology)
Max Simchowitz (MIT)
Juan Perdomo (University of California, Berkeley)
Kaiqing Zhang (Massachusetts Institute of Technology)
Russ Tedrake (MIT)