Poster
Error Propagation for Approximate Policy and Value Iteration
Amir-massoud Farahmand · Remi Munos · Csaba Szepesvari

Mon Dec 06 12:00 AM -- 12:00 AM (PST)

We address the question of how the approximation error/Bellman residual at each iteration of Approximate Policy/Value Iteration (API/AVI) algorithms influences the quality of the resulting policy. We quantify the performance loss in terms of the Lp norm of the approximation error/Bellman residual at each iteration. Moreover, we show that the performance loss depends on the expectation of the squared Radon-Nikodym derivative of a certain distribution rather than on its supremum, in contrast to what previous results suggested. Our results also indicate that the contribution of the approximation/Bellman error to the performance loss is more prominent in the later iterations of API/AVI, while the effect of an error term in the earlier iterations decays exponentially fast.
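The exponential decay of early errors can be seen in a toy numerical experiment. The sketch below (not the paper's analysis; the 2-state MDP, rewards, and discount factor are hypothetical choices for illustration) runs value iteration, injects a one-off error at a chosen iteration k, and shows that the perturbation's effect on the final value function shrinks like a power of the discount factor in the number of remaining iterations:

```python
GAMMA = 0.9  # discount factor (hypothetical choice for this illustration)

# Toy deterministic 2-state MDP: state 0 -> state 1, state 1 -> state 0,
# with fixed rewards (hypothetical numbers, for illustration only).
NEXT_STATE = [1, 0]
REWARD = [1.0, 0.5]

def bellman_backup(v):
    """One exact value-iteration backup for the toy MDP."""
    return [REWARD[s] + GAMMA * v[NEXT_STATE[s]] for s in range(2)]

def run_avi(num_iters, error_iter=None, error=0.0):
    """Run value iteration; optionally add `error` to every state at
    iteration `error_iter` to mimic a one-off approximation error."""
    v = [0.0, 0.0]
    for k in range(num_iters):
        v = bellman_backup(v)
        if k == error_iter:
            v = [x + error for x in v]
    return v

K = 30
clean = run_avi(K)
for k in (0, 10, 20, 29):
    noisy = run_avi(K, error_iter=k, error=1.0)
    diff = max(abs(a - b) for a, b in zip(clean, noisy))
    # The injected error is contracted by the K-1-k remaining backups,
    # so the final deviation equals GAMMA ** (K - 1 - k).
    print(k, round(diff, 6), round(GAMMA ** (K - 1 - k), 6))
```

An error injected late (k = 29) survives almost intact, while the same error injected at k = 0 is attenuated by a factor of gamma^29, consistent with the paper's claim that later-iteration errors dominate the performance loss.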

Author Information

Amir-massoud Farahmand (Vector Institute)
Remi Munos (Google DeepMind)
Csaba Szepesvari (DeepMind / University of Alberta)