A Benchmark for Systematic Generalization in Grounded Language Understanding
Laura Ruis · Jacob Andreas · Marco Baroni · Diane Bouchacourt · Brenden Lake

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1563

Humans easily interpret expressions that describe unfamiliar situations composed from familiar parts ("greet the pink brontosaurus by the ferris wheel"). Modern neural networks, by contrast, struggle to interpret novel compositions. In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding. Going beyond a related benchmark that focused on syntactic aspects of generalization, gSCAN defines a language grounded in the states of a grid world, facilitating novel evaluations of acquiring linguistically motivated rules. For example, agents must understand how adjectives such as 'small' are interpreted relative to the current world state, or how adverbs such as 'cautiously' combine with new verbs. We test a strong multi-modal baseline model and a state-of-the-art compositional method, finding that, in most cases, they fail dramatically when generalization requires systematic compositional rules.
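To make the context-dependent interpretation concrete, here is a minimal, hypothetical sketch (not the gSCAN implementation) of how a size adjective like 'small' picks out a referent relative to the other candidates in the current world state, rather than denoting an absolute size:

```python
# Hypothetical illustration: size adjectives are relative to the
# candidate set in the current world state, not absolute thresholds.

def resolve_referent(objects, color=None, shape=None, size_adj=None):
    """Pick the object matching a description; 'small'/'big' are
    interpreted relative to the other matching candidates."""
    candidates = [o for o in objects
                  if (color is None or o["color"] == color)
                  and (shape is None or o["shape"] == shape)]
    if size_adj == "small":
        return min(candidates, key=lambda o: o["size"])
    if size_adj == "big":
        return max(candidates, key=lambda o: o["size"])
    (candidate,) = candidates  # without a size adjective, must be unique
    return candidate

world = [
    {"shape": "circle", "color": "red", "size": 2, "pos": (1, 1)},
    {"shape": "circle", "color": "red", "size": 4, "pos": (3, 0)},
]
# "the small red circle" denotes the size-2 object in this world,
# but the same object would count as 'big' next to a size-1 circle.
target = resolve_referent(world, color="red", shape="circle", size_adj="small")
```

A model that memorizes 'small' as a fixed size category will fail the moment the same object must be described as 'big' in a different world state, which is exactly the kind of systematic rule the benchmark probes.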

Author Information

Laura Ruis (University of Amsterdam)
Jacob Andreas (MIT)
Marco Baroni (Facebook Artificial Intelligence Research)
Diane Bouchacourt (Facebook AI)
Brenden Lake (New York University)