Poster
A Kernel Loss for Solving the Bellman Equation
Yihao Feng · Lihong Li · Qiang Liu

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #223

Value function learning plays a central role in many state-of-the-art reinforcement learning algorithms. Many popular algorithms, such as Q-learning, do not optimize any objective function; instead, they are fixed-point iterations of some variant of the Bellman operator that is not necessarily a contraction. As a result, they may easily lose convergence guarantees, as can be observed in practice. In this paper, we propose a novel loss function that can be optimized using standard gradient-based methods with guaranteed convergence. The key advantage is that its gradient can be easily approximated using sampled transitions, avoiding the need for the double samples required by prior algorithms such as residual gradient. Our approach may be combined with general function classes such as neural networks, using either on- or off-policy data, and is shown to work reliably and effectively on several benchmarks, including classic problems where standard algorithms are known to diverge.
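To make the double-sampling point concrete, below is a minimal sketch of how a kernel-weighted Bellman loss can be estimated from sampled transitions. This is an illustrative reconstruction, not the paper's reference implementation: the function names (kernel_bellman_loss, rbf_kernel), the Gaussian RBF kernel choice, the discount factor, and the policy-evaluation setting are all assumptions made for the example. What it demonstrates is the mechanism described in the abstract: residuals from two different transitions are paired through a kernel, so a single sampled next state per transition suffices, whereas the squared Bellman residual would need two independent next-state samples from the same state.

import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian RBF kernel on (possibly vector-valued) states.
    d = np.atleast_1d(x).astype(float) - np.atleast_1d(y).astype(float)
    return np.exp(-np.dot(d, d) / (2.0 * bandwidth ** 2))

def kernel_bellman_loss(V, transitions, kernel=rbf_kernel, gamma=0.99):
    # U-statistic estimate of a kernel Bellman loss for policy evaluation.
    #   V:           callable mapping a state to its estimated value
    #   transitions: list of (s, r, s_next) tuples collected under the
    #                target policy
    # One Bellman residual per transition; pairing residuals of *different*
    # transitions is what avoids the double-sampling requirement.
    resid = [r + gamma * V(s_next) - V(s) for (s, r, s_next) in transitions]
    states = [s for (s, _, _) in transitions]
    n = len(transitions)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:  # off-diagonal pairs give an unbiased U-statistic
                total += resid[i] * kernel(states[i], states[j]) * resid[j]
    return total / (n * (n - 1))

# Tiny usage example: a two-state chain with a linear value function.
if __name__ == "__main__":
    transitions = [(0.0, 1.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)]
    w = 0.5
    V = lambda s: w * s
    print(kernel_bellman_loss(V, transitions))

Because the estimate is a smooth function of V's parameters, its gradient can be taken with standard automatic differentiation and minimized with gradient descent, which is the sense in which the abstract contrasts this loss with fixed-point iterations such as Q-learning.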

Author Information

Yihao Feng (UT Austin)

I am a Ph.D. student at UT Austin, where I work on reinforcement learning and approximate inference. I am looking for internships for summer 2020! Please feel free to contact me (yihao AT cs.utexas.edu) if you have open positions!

Lihong Li (Google Research)
Qiang Liu (UT Austin)
