Poster
Fast deep reinforcement learning using online adjustments from the past
Steven Hansen · Alexander Pritzel · Pablo Sprechmann · Andre Barreto · Charles Blundell

Wed Dec 05 07:45 AM -- 09:45 AM (PST) @ Room 210 #34

We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by prioritised sweeping over experience tuples from the replay buffer near the current state. EVA draws together a number of recent ideas for integrating episodic memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA performs well on a demonstration task and on Atari games.
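The mechanism described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the Euclidean retrieval metric, the simple n-step backup standing in for full prioritised sweeping, and the mixing weight `lam` are all assumptions made for the sake of the example.

```python
import numpy as np

def retrieve_neighbours(query_embedding, buffer_embeddings, k=10):
    """Content-based retrieval: indices of the k replay-buffer entries whose
    state embeddings are closest to the current state (Euclidean distance
    assumed here for simplicity)."""
    dists = np.linalg.norm(buffer_embeddings - query_embedding, axis=1)
    return np.argsort(dists)[:k]

def trajectory_value(rewards, q_bootstrap, gamma=0.99):
    """Backup along one stored trajectory segment near the current state:
    discounted sum of rewards plus a bootstrapped value at the end, used as
    a stand-in for the prioritised-sweeping estimate."""
    value = q_bootstrap
    for r in reversed(rewards):
        value = r + gamma * value
    return value

def eva_q_value(q_theta, q_nonparametric, lam=0.5):
    """Ephemeral adjustment: blend the network's Q-value prediction with the
    non-parametric estimate built from nearby replay tuples. The blending
    weight lam is a hypothetical hyperparameter, not a value from the paper."""
    return lam * q_theta + (1.0 - lam) * q_nonparametric
```

In this sketch the agent would act greedily with respect to `eva_q_value`, so the adjustment influences behaviour immediately while leaving the network's parameters untouched.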

Author Information

Steven Hansen (DeepMind)
Alexander Pritzel (DeepMind)
Pablo Sprechmann (DeepMind)
Andre Barreto (DeepMind)
Charles Blundell (DeepMind)
