Poster
Temporal Difference with Eligibility Traces Derived from First Principles
Marcus Hutter · Shane Legg

Tue Dec 04 10:30 AM -- 10:40 AM (PST)

We derive an equation for temporal difference learning from statistical first principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so-called TD(λ), but it lacks the parameter α that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(λ) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins's Q(λ) and Sarsa(λ) and find that it again offers superior performance with fewer parameters.
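For reference, the baseline the abstract compares against is tabular TD(λ) with accumulating eligibility traces, which requires the hand-tuned global learning rate α that the paper's derived rule eliminates. Below is a minimal sketch of that baseline; the episode interface (`episodes` as an iterable of `(state, reward, next_state)` transitions) and all parameter values are illustrative assumptions, not from the paper, and the paper's own per-transition learning-rate equation is given in the paper itself rather than reproduced here.

```python
import numpy as np

def td_lambda(episodes, n_states, gamma=0.9, lam=0.8, alpha=0.1):
    """Standard TD(lambda) with accumulating eligibility traces.

    `episodes`: iterable of episodes, each a sequence of
    (state, reward, next_state) transitions -- a hypothetical
    interface used only for this sketch.
    """
    V = np.zeros(n_states)            # discounted state value estimates
    for episode in episodes:
        E = np.zeros(n_states)        # eligibility traces, reset per episode
        for s, r, s_next in episode:
            delta = r + gamma * V[s_next] - V[s]  # TD error
            E[s] += 1.0                           # accumulate trace for s
            V += alpha * delta * E                # credit all traced states
            E *= gamma * lam                      # decay traces
    return V
```

The paper's contribution, as described above, is to replace the fixed `alpha` in the value update with a learning rate computed per state transition from first principles, removing it as a free parameter to tune.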

Author Information

Marcus Hutter (Australian National University)
Shane Legg (DeepMind)
