How to talk so AI will learn: instructions, descriptions, and pragmatics
Theodore Sumers · Robert Hawkins · Mark Ho · Tom Griffiths · Dylan Hadfield-Menell
Event URL: https://openreview.net/forum?id=Wfcbb0d7UEs

Humans intuitively use language to express their beliefs and desires, but today we lack computational models explaining such abstract language use. To address this challenge, we consider social learning in a linear bandit setting and ask how a human might communicate preferences over behaviors (i.e., the reward function). We study two distinct types of language: instructions, which specify partial policies, and descriptions, which provide information about the reward function. To explain how humans use such language, we suggest they reason about both known present and unknown future states: instructions optimize for the present, while descriptions optimize for the future. We formalize this choice by extending reward design to consider a distribution over states. We then define a pragmatic listener agent that infers the speaker's reward function by reasoning about how the speaker expresses themselves. Simulations suggest that (1) descriptions afford stronger learning than instructions, and (2) maintaining uncertainty over the speaker's pedagogical intent allows for robust reward inference. We hope these insights facilitate a shift from developing agents that obey language to agents that learn from it.
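The pragmatic-listener idea above can be sketched in the rational-speech-acts style: a literal listener conditions on an utterance's truth, a speaker prefers utterances that concentrate the literal listener's belief on the true reward weights, and a pragmatic listener inverts that speaker model to infer the reward function. The bandit features, candidate weight grid, and utterance set below are illustrative assumptions for a minimal sketch, not the paper's actual experimental setup.

```python
import numpy as np

# Hypothetical 3-armed linear bandit: each arm has a 2-d feature vector,
# and reward is the dot product of features with an unknown weight vector w.
PHI = np.array([[1.0, 0.0],   # arm 0
                [0.0, 1.0],   # arm 1
                [0.7, 0.7]])  # arm 2
WS = np.array([[1, -1], [-1, 1], [1, 1], [-1, -1]], dtype=float)  # candidate w's
PRIOR = np.full(len(WS), 1.0 / len(WS))

def semantics(utterance, w):
    """Truth value of an utterance under reward weights w.
    Instructions name the best arm; descriptions give the sign of one weight."""
    kind, idx, val = utterance
    if kind == "instruct":                      # "take arm idx"
        return float(np.argmax(PHI @ w) == idx)
    return float(np.sign(w[idx]) == val)        # "feature idx has sign val"

UTTS = ([("instruct", a, None) for a in range(len(PHI))]
        + [("describe", j, s) for j in range(2) for s in (1.0, -1.0)])

def literal_listener(utterance):
    # Bayesian update: keep only weight vectors under which the utterance is true.
    post = PRIOR * np.array([semantics(utterance, w) for w in WS])
    return post / post.sum()

def speaker(w_idx, alpha=4.0):
    # Softmax over utterances, scored by how much probability the literal
    # listener would then place on the speaker's true weights.
    scores = np.array([literal_listener(u)[w_idx] for u in UTTS])
    p = np.exp(alpha * scores)
    return p / p.sum()

def pragmatic_listener(utterance):
    # Invert the speaker model: which w would make this utterance likely?
    u_idx = UTTS.index(utterance)
    post = PRIOR * np.array([speaker(i)[u_idx] for i in range(len(WS))])
    return post / post.sum()
```

For example, on hearing the instruction "take arm 1", the pragmatic listener concentrates its posterior on the candidate weights under which arm 1 is uniquely optimal; a description of a single weight's sign instead spreads mass over all weight vectors consistent with that sign, which is what lets descriptions generalize to unseen states.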

Author Information

Theodore Sumers (Princeton University)

My research uses reinforcement learning and decision theory to study human communication. Theoretically, I'm interested in explaining how societies accumulate information over generations. Practically, I hope to develop artificial systems capable of interacting with and learning from humans.

Robert Hawkins (Princeton University)
Mark Ho (New York University)
Tom Griffiths (Princeton University)
Dylan Hadfield-Menell (MIT)

More from the Same Authors