Poster in Workshop: Agent Learning in Open-Endedness Workshop

What can AI Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration

Yuqing Du · Eliza Kosoy · Alyssa L Dayan · Maria Rufova · Alison Gopnik · Pieter Abbeel

Keywords: [ Minecraft ] [ Exploration ] [ Humans ] [ RL ] [ Intrinsic Motivation ]


Abstract:

What drives exploration? Understanding intrinsic motivation is a long-standing question in both cognitive science and artificial intelligence (AI); numerous exploration objectives have been proposed, tested in human experiments, and used to train reinforcement learning (RL) agents. However, human experiments often take place in simplistic environments that fail to capture the complexity of real-world exploration, while RL experiments use more complex environments yet produce agents whose exploration falls far short of humans'. In this work we directly compare human and agent exploration in a shared open-ended environment, Crafter (Hafner, 2021). We study how well commonly proposed information-theoretic objectives for intrinsic motivation relate to actual human and agent behavior, finding that human exploration consistently and significantly correlates with entropy, information gain, and empowerment, whereas the exploration of intrinsically motivated RL agents does not. We also analyze self-talk during play and find that children's verbalizations exhibit a significant relationship with empowerment and entropy, whereas adults' verbalizations do not.
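For concreteness, one standard way to operationalize the entropy objective named above is the Shannon entropy of an agent's empirical state-visitation distribution over a trajectory. The sketch below is illustrative only, not the paper's measurement pipeline; in particular, the assumption that states can be reduced to hashable identifiers is ours.

```python
import math
from collections import Counter

def visitation_entropy(states):
    """Shannon entropy (in bits) of the empirical state-visitation
    distribution over a trajectory of hashable state identifiers.
    Higher values indicate visits spread over more distinct states."""
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# An explorer that keeps revisiting one state scores lower than one
# that spreads its visits evenly across many states.
print(visitation_entropy(["a", "a", "a", "b"]))  # ~0.811 bits
print(visitation_entropy(["a", "b", "c", "d"]))  # 2.0 bits
```

Information gain and empowerment require model-based estimates (reduction in uncertainty about environment dynamics, and channel capacity between actions and future states, respectively), so they do not reduce to a one-line count-based statistic like this.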
