

Poster

Regularized Off-Policy TD-Learning

Bo Liu · Sridhar Mahadevan · Ji Liu

Harrah’s Special Events Center 2nd Floor

Abstract: We present a novel $l_1$ regularized off-policy convergent TD-learning method (termed RO-TD), which learns sparse representations of value functions with low computational complexity. The algorithmic framework underlying RO-TD integrates two key ideas: off-policy convergent gradient TD methods, such as TDC, and a convex-concave saddle-point formulation of non-smooth convex optimization, which enables first-order solvers and feature selection using online convex regularization. A detailed theoretical and experimental analysis of RO-TD is presented, with experiments illustrating the off-policy convergence, sparse feature selection capability, and low computational cost of the RO-TD algorithm.
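The abstract names two ingredients: an off-policy convergent gradient TD method (TDC) and an $l_1$ penalty handled by a first-order solver. The sketch below is only a rough illustration of that combination, not the paper's algorithm: it applies standard off-policy TDC updates followed by a proximal soft-thresholding step, which is a simpler stand-in for the convex-concave saddle-point solver the abstract describes. All names, step sizes, and the handling of the importance weight here are assumptions.

```python
import numpy as np

def l1_tdc_update(theta, w, phi, phi_next, reward, rho,
                  gamma=0.99, alpha=0.05, beta=0.5, l1=0.01):
    """One l1-regularized off-policy TDC-style update (illustrative only).

    Combines the standard TDC (gradient TD) update with a proximal
    soft-thresholding step as a simple stand-in for RO-TD's
    saddle-point solver. Hyperparameters are arbitrary assumptions.
    """
    # TD error under the current value-function weights theta
    delta = reward + gamma * theta @ phi_next - theta @ phi
    # TDC main-weight update, importance-weighted by rho for off-policy data;
    # the second term corrects the gradient bias of plain off-policy TD
    theta = theta + alpha * rho * (delta * phi - gamma * (w @ phi) * phi_next)
    # Auxiliary weights w track the projected TD error
    w = w + beta * rho * (delta - w @ phi) * phi
    # l1 proximal step: soft-thresholding pushes small coordinates to zero,
    # giving the sparse value-function representation the abstract mentions
    theta = np.sign(theta) * np.maximum(np.abs(theta) - alpha * l1, 0.0)
    return theta, w

# Tiny usage example with random features (on-policy, so rho = 1)
d = 8
theta, w = np.zeros(d), np.zeros(d)
rng = np.random.default_rng(0)
for _ in range(1000):
    phi, phi_next = rng.random(d), rng.random(d)
    theta, w = l1_tdc_update(theta, w, phi, phi_next,
                             reward=rng.random(), rho=1.0)
print(np.round(theta, 3))  # the l1 step biases theta toward sparsity
```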
