
Poster

Learning to Play With Intrinsically-Motivated, Self-Aware Agents

Nick Haber · Damian Mrowca · Stephanie Wang · Li Fei-Fei · Daniel Yamins

Room 210 #68

Keywords: [ Exploration ] [ Human or Animal Learning ] [ Virtual Environments ] [ Model-Based RL ] [ Reinforcement Learning ] [ Cognitive Science ] [ Representation Learning ] [ Adversarial Networks ] [ Unsupervised Learning ] [ Active Learning ]


Abstract:

Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a "world-model" network that learns to predict the dynamic consequences of the agent's actions. Simultaneously, we train a separate explicit "self-model" that allows the agent to track the error map of its world-model. The agent then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization, and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in realistic physical environments.
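To make the world-model/self-model interplay concrete, here is a minimal sketch of the training loop the abstract describes: a world-model trained to predict action consequences, a self-model trained to predict the world-model's error, and an adversarial policy that favors actions the self-model expects the world-model to get wrong. This is an illustrative PyTorch sketch, not the paper's actual architecture; all network sizes, shapes, the toy `env_step` dynamics, and the sampling-based action selection are assumptions.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HIDDEN = 32, 4, 64  # illustrative dimensions

class WorldModel(nn.Module):
    """Predicts the next observation from the current observation and action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, STATE_DIM),
        )
    def forward(self, obs, action):
        return self.net(torch.cat([obs, action], dim=-1))

class SelfModel(nn.Module):
    """Predicts the world-model's prediction error for a candidate action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1),
        )
    def forward(self, obs, action):
        return self.net(torch.cat([obs, action], dim=-1)).squeeze(-1)

def env_step(obs, action):
    # Hypothetical stand-in for the simulated environment's dynamics.
    return obs + 0.1 * action.sum(-1, keepdim=True) * torch.randn_like(obs)

world_model, self_model = WorldModel(), SelfModel()
wm_opt = torch.optim.Adam(world_model.parameters(), lr=1e-3)
sm_opt = torch.optim.Adam(self_model.parameters(), lr=1e-3)

obs = torch.randn(1, STATE_DIM)
for step in range(1000):
    # Adversarial policy: among sampled candidate actions, take the one the
    # self-model predicts the world-model will get most wrong.
    candidates = torch.randn(16, ACTION_DIM)
    with torch.no_grad():
        predicted_error = self_model(obs.expand(16, -1), candidates)
    action = candidates[predicted_error.argmax()].unsqueeze(0)

    next_obs = env_step(obs, action)

    # Train the world-model to predict the dynamic consequence of the action.
    wm_loss = (world_model(obs, action) - next_obs).pow(2).mean()
    wm_opt.zero_grad(); wm_loss.backward(); wm_opt.step()

    # Train the self-model to track the world-model's current error map.
    sm_loss = (self_model(obs, action) - wm_loss.detach()).pow(2).mean()
    sm_opt.zero_grad(); sm_loss.backward(); sm_opt.step()

    obs = next_obs.detach()
```

Because the self-model's reward signal (predicted world-model error) shrinks wherever the world-model improves, this loop pushes the agent away from already-mastered interactions and toward novel, informative ones, which is the mechanism behind the emergent behaviors described above.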
