

Poster in Workshop: Memory in Artificial and Real Intelligence (MemARI)

Using Hippocampal Replay to Consolidate Experiences in Memory-Augmented Reinforcement Learning

Chong Min John Tan · Mehul Motani

Keywords: [ memory augmentation ] [ Reinforcement Learning ] [ go-explore ] [ hippocampal replay ] [ count-based ] [ memory consolidation ]


Abstract:

Reinforcement Learning (RL) agents traditionally have difficulty learning in sparse reward settings. Go-Explore is a state-of-the-art algorithm that learns well in spite of sparse rewards, largely because it stores experiences in external memory and updates this memory with better trajectories. We improve upon this method and introduce a more efficient count-based approach for both the state selection ("Go") and exploration ("Explore") phases, and we perform a novel form of hippocampal replay, inspired by the sharp-wave ripples (SWR) observed during hippocampal replay in mice, to consolidate successful trajectories and enable consistent performance.
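The count-based "Go" phase mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: the archive structure, the inverse-count weighting 1/(count+1), and the function name `select_state` are all illustrative assumptions, shown only to convey the Go-Explore-style idea of preferentially returning to rarely visited stored states.

```python
import random


def select_state(archive_counts, rng=random):
    """Pick an archived state to return to ("Go" phase).

    archive_counts: dict mapping a state identifier to its visit count.
    States with lower counts get higher selection probability via the
    (assumed, illustrative) inverse-count weight 1 / (count + 1).
    """
    states = list(archive_counts)
    weights = [1.0 / (archive_counts[s] + 1) for s in states]
    return rng.choices(states, weights=weights, k=1)[0]


# Hypothetical archive: state id -> visit count.
archive = {"s0": 10, "s1": 1, "s2": 3}
chosen = select_state(archive)
```

In this sketch, a state visited once is selected far more often than one visited ten times, which biases the agent toward under-explored regions without any reward signal.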
