Poster in Workshop: Memory in Artificial and Real Intelligence (MemARI)

Evidence accumulation in deep RL agents powered by a cognitive model

James Mochizuki-Freeman · Sahaj Singh Maini · Zoran Tiganj


Abstract:

Evidence accumulation is thought to be fundamental for decision-making in humans and other mammals. Neuroscience studies suggest that the hippocampus encodes a low-dimensional, ordered representation of evidence through sequential neural activity. Cognitive modelers have proposed a mechanism by which such sequential activity could emerge through modulation of recurrent weights by changes in the amount of evidence. Here we integrated a cognitive science model inside a deep reinforcement learning (RL) agent and trained the agent to perform a simple evidence accumulation task inspired by behavioral experiments on animals. We compared its performance with that of agents equipped with GRUs and vanilla RNNs. We found that the agent based on the cognitive model learned much faster and generalized better while having significantly fewer parameters. This study illustrates how integrating cognitive models with deep learning systems can lead to brain-like neural representations that improve learning.
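The abstract does not specify the cognitive model's equations, but the core idea (sequential activity driven by evidence rather than by time) can be illustrated with a toy sketch. In the hypothetical example below, a recurrent update is applied only when a new piece of evidence arrives, and the recurrent weights form a sub-diagonal (shift-like) matrix, so a bump of activity moves one unit forward per unit of evidence; the unit names and parameters are illustrative assumptions, not the authors' model.

```python
import numpy as np

n_units, n_steps = 10, 30

# Sub-diagonal recurrent matrix: each evidence-gated update pushes
# activity from unit k to unit k+1 (toy stand-in for the cognitive
# model's evidence-modulated recurrent weights; values are illustrative).
W_rec = np.eye(n_units, k=-1) * 0.9

h = np.zeros(n_units)
h[0] = 1.0                      # bump starts at unit 0 (evidence count 0)

# Toy evidence stream: four cues arrive at these time steps.
cues = np.zeros(n_steps)
cues[[3, 7, 12, 20]] = 1.0

active_unit = []
for t in range(n_steps):
    if cues[t]:
        # Recurrent dynamics advance only when evidence changes,
        # so the active unit indexes the running evidence count.
        h = W_rec @ h
    active_unit.append(int(h.argmax()))

print(active_unit[-1])          # → 4 (four cues seen)
```

Because each unit is maximally active at exactly one evidence count, the population forms a low-dimensional, ordered representation of accumulated evidence, which is the kind of hippocampus-like code the abstract describes.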