

Poster

Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation

Dotan Di Castro · Dima Volkinshtein · Ron Meir


Abstract:

Actor-critic algorithms for reinforcement learning are achieving renewed popularity due to their good convergence properties in situations where other approaches often fail (e.g., when function approximation is involved). Interestingly, there is growing evidence that actor-critic approaches based on phasic dopamine signals play a key role in biological learning in cortical and basal ganglia circuits. We derive a temporal-difference-based actor-critic learning algorithm for which convergence can be proved without assuming separate time scales for the actor and the critic. The approach is demonstrated by applying it to networks of spiking neurons. The established relation between phasic dopamine and the temporal difference signal lends support to the biological relevance of such algorithms.
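The update scheme the abstract describes can be sketched compactly. Below is a minimal tabular TD actor-critic in Python in which a single TD error drives both the critic and the actor, with both updates using the same step size, in the spirit of the single-time-scale setting the abstract emphasizes. The toy chain MDP, the softmax policy, and all constants are illustrative assumptions; this is not the paper's spiking-network construction.

```python
# Minimal sketch of a TD-based actor-critic update, assuming a tabular
# MDP with a softmax (Gibbs) policy. Illustrates the generic scheme
# (one TD error shared by actor and critic), not the authors' method.
import numpy as np

n_states, n_actions = 5, 2          # hypothetical chain MDP
gamma, alpha = 0.95, 0.05           # discount factor, shared step size

V = np.zeros(n_states)              # critic: state-value estimates
theta = np.zeros((n_states, n_actions))  # actor: policy parameters

def policy(s):
    """Softmax policy over the actor parameters for state s."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def step(s, a):
    """Toy dynamics: action 1 moves right, action 0 moves left;
    reward 1 only on reaching the rightmost state."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

rng = np.random.default_rng(0)
s = 0
for t in range(20000):
    p = policy(s)
    a = rng.choice(n_actions, p=p)
    s_next, r = step(s, a)

    # TD error: the single learning signal shared by actor and critic
    # (the quantity the abstract relates to phasic dopamine).
    delta = r + gamma * V[s_next] - V[s]

    V[s] += alpha * delta                     # critic update
    grad_log = -p                             # d/dtheta log pi(a|s)
    grad_log[a] += 1.0
    theta[s] += alpha * delta * grad_log      # actor update, same step size

    s = 0 if s_next == n_states - 1 else s_next  # restart episodes
```

Note that both updates deliberately share the single step size `alpha`; classical two-time-scale analyses instead require the critic's step size to dominate the actor's, which is the assumption the abstract's convergence result dispenses with.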
