Modeling Human Exploration Through Resource-Rational Reinforcement Learning

Marcel Binz · Eric Schulz

Hall J (level 1) #118

Keywords: [ Cognitive Science ] [ Exploration ] [ Meta-Learning ] [ Resource-Rationality ]


Equipping artificial agents with useful exploration mechanisms remains a challenge to this day. Humans, on the other hand, seem to manage the trade-off between exploration and exploitation effortlessly. In the present article, we put forward the hypothesis that they accomplish this by making optimal use of limited computational resources. We study this hypothesis by meta-learning reinforcement learning algorithms that sacrifice performance for a shorter description length (defined as the number of bits required to implement the given algorithm). The emerging class of models captures human exploration behavior better than previously considered approaches, such as Boltzmann exploration, upper confidence bound algorithms, and Thompson sampling. We additionally demonstrate that changing the description length in our class of models produces the intended effects: reducing description length captures the behavior of brain-lesioned patients while increasing it mirrors cognitive development during adolescence.
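To make the baseline exploration strategies named above concrete, here is a minimal sketch of Boltzmann exploration, upper confidence bounds, and Thompson sampling on a Bernoulli bandit. This is an illustrative implementation of the classical comparison algorithms only, not of the paper's meta-learned, description-length-limited models; all function names and parameter values are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def boltzmann(values, beta=2.0):
    # Boltzmann (softmax) exploration: sample actions in proportion
    # to exponentiated value estimates; beta is the inverse temperature.
    logits = beta * values
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(rng.choice(len(values), p=p))

def ucb(values, counts, t, c=1.0):
    # Upper confidence bound: add an exploration bonus that shrinks
    # as an arm is sampled more often.
    bonus = c * np.sqrt(np.log(t + 1) / (counts + 1e-8))
    return int(np.argmax(values + bonus))

def thompson(successes, failures):
    # Thompson sampling for Bernoulli arms: draw a reward probability
    # from each arm's Beta posterior and pick the arm with the largest draw.
    samples = rng.beta(successes + 1, failures + 1)
    return int(np.argmax(samples))

# Example: run Thompson sampling on a two-armed Bernoulli bandit.
true_p = np.array([0.3, 0.7])
succ = np.zeros(2)
fail = np.zeros(2)
for _ in range(500):
    a = thompson(succ, fail)
    r = rng.random() < true_p[a]
    succ[a] += r
    fail[a] += 1 - r
```

After 500 trials the sampler concentrates its pulls on the better arm; the paper's resource-rational models can be read as trading off exactly this kind of exploratory computation against the bits needed to implement it.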
