

Poster

Using natural language and program abstractions to instill human inductive biases in machines

Sreejan Kumar · Carlos G. Correa · Ishita Dasgupta · Raja Marjieh · Michael Y. Hu · Robert Hawkins · Jonathan D. Cohen · Nathaniel Daw · Karthik Narasimhan · Tom Griffiths

Hall J (level 1) #942

Keywords: [ natural language ] [ human intelligence ] [ Reinforcement Learning ] [ Cognitive Science ] [ Meta-Learning ] [ Program Induction ]

Award: Outstanding Paper
[ Paper ] [ Poster ] [ OpenReview ]

Abstract:

Strong inductive biases give humans the ability to quickly learn to perform a variety of tasks. Although meta-learning is a method to endow neural networks with useful inductive biases, agents trained by meta-learning may sometimes acquire very different strategies from humans. We show that co-training these agents on predicting representations from natural language task descriptions and from programs induced to generate such tasks guides them toward more human-like inductive biases. Human-generated language descriptions and program induction models that add new learned primitives both contain abstract concepts that can compress description length. Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.
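The abstract does not describe the implementation, but the co-training idea can be sketched as an auxiliary prediction objective added to the agent's reinforcement-learning loss. The PyTorch sketch below is a hypothetical illustration only: the names (CoTrainedAgent, co_training_loss), network sizes, and the REINFORCE-style policy term are assumptions, not the paper's actual architecture, and target_repr stands in for an embedding of a task's natural-language description or induced program.

```python
import torch
import torch.nn as nn

class CoTrainedAgent(nn.Module):
    """Illustrative agent (not the paper's model) with a policy head for the
    RL objective and an auxiliary head that predicts a task representation,
    e.g. an embedding of the task's language description or induced program."""

    def __init__(self, obs_dim, n_actions, hidden_dim=128, repr_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden_dim, n_actions)  # RL objective
        self.repr_head = nn.Linear(hidden_dim, repr_dim)     # auxiliary objective

    def forward(self, obs):
        h = self.encoder(obs)
        return self.policy_head(h), self.repr_head(h)


def co_training_loss(agent, obs, actions, returns, target_repr, aux_weight=1.0):
    """Combine a simple policy-gradient loss with an auxiliary loss that pulls
    the agent's hidden state toward the abstract task representation."""
    logits, pred_repr = agent(obs)
    log_probs = torch.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(-1)).squeeze(-1)
    rl_loss = -(chosen * returns).mean()                       # REINFORCE-style term
    aux_loss = nn.functional.mse_loss(pred_repr, target_repr)  # representation prediction
    return rl_loss + aux_weight * aux_loss


# Toy usage with random data, just to show the shapes involved.
agent = CoTrainedAgent(obs_dim=16, n_actions=4)
obs = torch.randn(32, 16)
actions = torch.randint(0, 4, (32,))
returns = torch.randn(32)
target_repr = torch.randn(32, 64)  # stand-in for a language/program embedding
loss = co_training_loss(agent, obs, actions, returns, target_repr)
loss.backward()
```

In this sketch, aux_weight controls the trade-off between maximizing task reward and matching the abstract task representation; the abstract's comparison of human-generated language and learned program primitives against less abstract controls concerns what that target representation is, not this particular loss form.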
