Workshop: Intrinsically Motivated Open-ended Learning (IMOL)

Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity

Jaedong Hwang · Zhang-Wei Hong · Eric Chen · Akhilan Boopathy · Pulkit Agrawal · Ila Fiete

Keywords: [ Curiosity ] [ Intrinsic Reward ] [ Reinforcement Learning ] [ Catastrophic Forgetting ] [ Neuroscience ] [ Memory ]


Intrinsic reward functions are widely used to improve exploration in reinforcement learning. We first examine the conditions and causes of catastrophic forgetting in the intrinsic reward function, and propose a new method, FarCuriosity, inspired by how humans and non-human animals learn. The method relies on fragmentation and recall: an agent fragments an environment based on surprisal signals and uses a different local curiosity module (a prediction-based intrinsic reward function) for each fragment, so that no single module is trained on the entire environment. At each fragmentation event, the agent stores the current module in long-term memory (LTM) and either initializes a new module or recalls a previously stored module based on its match with the current state. With fragmentation and recall, FarCuriosity achieves less forgetting and better overall performance on games with varied and heterogeneous environments in the Atari benchmark suite. This work thus highlights the problem of catastrophic forgetting in prediction-based curiosity methods and proposes a first solution.
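The fragmentation-and-recall loop described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear forward model, the Euclidean anchor-matching rule, and the shared surprisal threshold are all simplifying assumptions made here for brevity.

```python
import numpy as np

class CuriosityModule:
    """Prediction-based intrinsic reward via a toy linear forward model
    (a hypothetical stand-in for the learned predictor in the paper)."""
    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, dim))
        self.lr = lr

    def surprisal(self, state, next_state):
        # Forward-model prediction error serves as the intrinsic reward.
        err = next_state - self.W @ state
        return float(np.mean(err ** 2))

    def update(self, state, next_state):
        # One gradient-like step reducing the prediction error.
        err = next_state - self.W @ state
        self.W += self.lr * np.outer(err, state)

class FarCuriosityAgent:
    """Fragmentation and recall: on high surprisal, store the current local
    module in long-term memory (LTM) and recall or initialize another."""
    def __init__(self, dim, threshold):
        self.dim = dim
        self.threshold = threshold
        self.current = CuriosityModule(dim)
        self.ltm = []  # long-term memory of (anchor_state, module) pairs

    def intrinsic_reward(self, state, next_state):
        s = self.current.surprisal(state, next_state)
        if s > self.threshold:
            # Fragmentation event: store the current module, then recall
            # the stored module whose anchor state best matches the current
            # state, or initialize a fresh module if none matches closely.
            self.ltm.append((state.copy(), self.current))
            best, best_dist = None, np.inf
            for anchor, module in self.ltm[:-1]:
                d = float(np.linalg.norm(anchor - state))
                if d < best_dist:
                    best, best_dist = module, d
            if best is not None and best_dist < self.threshold:
                self.current = best
            else:
                self.current = CuriosityModule(self.dim)
            s = self.current.surprisal(state, next_state)
        self.current.update(state, next_state)
        return s
```

Because each local module only ever trains on transitions from its own fragment, learning in one fragment cannot overwrite the predictions, and hence the intrinsic rewards, learned for another.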