Regularized Off-Policy TD-Learning
Bo Liu · Sridhar Mahadevan · Ji Liu

Wed Dec 05 11:44 AM -- 11:48 AM (PST) @ Harveys Convention Center Floor, CC
We present a novel $l_1$ regularized off-policy convergent TD-learning method (termed RO-TD), which is able to learn sparse representations of value functions with low computational complexity. The algorithmic framework underlying RO-TD integrates two key ideas: off-policy convergent gradient TD methods, such as TDC, and a convex-concave saddle-point formulation of non-smooth convex optimization, which enables first-order solvers and feature selection using online convex regularization. A detailed theoretical and experimental analysis of RO-TD is presented, with experiments illustrating the algorithm's off-policy convergence, sparse feature selection capability, and low computational cost.
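As a rough illustration of the ideas in the abstract, the sketch below pairs off-policy TDC updates with an $l_1$ proximal (soft-thresholding) step to produce sparse value-function weights. This is not the paper's method: RO-TD solves the regularized objective through a convex-concave saddle-point formulation with first-order solvers, whereas this sketch swaps in a simple proximal step. All names and hyperparameters (`tdc_l1_step`, `alpha`, `beta`, `lam`, `rho`) are illustrative assumptions.

```python
# A minimal sketch of l1-regularized off-policy TD learning in the spirit
# of RO-TD. NOTE: this is an illustrative approximation, not the paper's
# saddle-point algorithm -- the l1 term is handled with a proximal
# (soft-thresholding) step on top of standard TDC updates.
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def tdc_l1_step(theta, w, phi, phi_next, reward, rho, gamma, alpha, beta, lam):
    """One TDC update with importance weight rho, then an l1 proximal step."""
    delta = reward + gamma * theta @ phi_next - theta @ phi  # TD error
    # TDC primary-weight update; the correction term uses the auxiliary w
    theta = theta + alpha * rho * (delta * phi - gamma * (w @ phi) * phi_next)
    # Auxiliary weights estimate the projection of the TD error onto features
    w = w + beta * rho * (delta - w @ phi) * phi
    # Soft-thresholding enforces sparsity induced by the l1 regularizer
    theta = soft_threshold(theta, alpha * lam)
    return theta, w

# Hypothetical usage on a single random transition:
d = 8
theta, w = np.zeros(d), np.zeros(d)
rng = np.random.default_rng(0)
phi, phi_next = rng.standard_normal(d), rng.standard_normal(d)
theta, w = tdc_l1_step(theta, w, phi, phi_next, reward=1.0, rho=1.0,
                       gamma=0.95, alpha=0.1, beta=0.05, lam=0.01)
```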

Author Information

Bo Liu (Auburn University)
Sridhar Mahadevan (UMass Amherst)
Ji Liu (University of Wisconsin-Madison)
