Oral
Linear Complementarity for Regularized Policy Evaluation and Improvement
Jeff Johns · Christopher Painter-Wakefield · Ronald Parr

Wed Dec 08 09:20 AM -- 09:40 AM (PST) @ Regency Ballroom

Recent work in reinforcement learning has emphasized the power of L1 regularization to perform feature selection and prevent overfitting. We propose formulating the L1 regularized linear fixed point problem as a linear complementarity problem (LCP). This formulation offers several advantages over the LARS-inspired formulation, LARS-TD. The LCP formulation allows the use of efficient off-the-shelf solvers, leads to a new uniqueness result, and can be initialized with starting points from similar problems (warm starts). We demonstrate that warm starts, as well as the efficiency of LCP solvers, can speed up policy iteration. Moreover, warm starts permit a form of modified policy iteration that can be used to approximate a "greedy" homotopy path, a generalization of the LARS-TD homotopy path that combines policy evaluation and optimization.
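For reference, a linear complementarity problem in standard form asks: given a square matrix M and a vector q, find a vector z meeting the conditions in the first display below. The second display is the L1-regularized linear fixed point in the LARS-TD style that the abstract refers to; the symbols Φ (feature matrix), Φ' (next-state feature matrix), R (rewards), γ (discount), and β (regularization weight) are conventional notation assumed here, not defined in this abstract, and the paper's particular construction of q and M is not reproduced. A minimal LaTeX sketch:

% Standard-form LCP (generic definition, not specific to the paper):
\[
  z \ge 0, \qquad q + M z \ge 0, \qquad z^{\top}(q + M z) = 0 .
\]
% L1-regularized linear fixed point, LARS-TD style (symbols assumed):
% w reproduces itself under an L1-penalized least-squares fit of its
% own one-step Bellman backup.
\[
  w \;=\; \operatorname*{arg\,min}_{u}\;
    \tfrac{1}{2}\,\bigl\lVert \Phi u - (R + \gamma \Phi' w) \bigr\rVert_2^2
    \;+\; \beta\,\lVert u \rVert_1 .
\]

On this reading, a warm start simply seeds the LCP solver with the solution of the previous, similar problem (e.g., the preceding policy's fixed point) rather than starting from scratch, which is what the abstract credits for speeding up policy iteration.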

Author Information

Jeff Johns (US Government)
Christopher Painter-Wakefield (Duke University)
Ronald Parr (Duke University)
