Poster
Structured Reordering for Modeling Latent Alignments in Sequence Transduction
Bailin Wang · Mirella Lapata · Ivan Titov

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Despite success in many domains, neural models struggle in settings where train and test examples are drawn from different distributions. In particular, in contrast to humans, conventional sequence-to-sequence (seq2seq) models fail to generalize systematically, i.e., interpret sentences representing novel combinations of concepts (e.g., text segments) seen in training. Traditional grammar formalisms excel in such settings by implicitly encoding alignments between input and output segments, but are hard to scale and maintain. Instead of engineering a grammar, we directly model segment-to-segment alignments as discrete structured latent variables within a neural seq2seq model. To efficiently explore the large space of alignments, we introduce a reorder-first align-later framework whose central component is a neural reordering module producing separable permutations. We present an efficient dynamic programming algorithm performing exact marginal inference of separable permutations, and, thus, enabling end-to-end differentiable training of our model. The resulting seq2seq model exhibits better systematic generalization than standard models on synthetic problems and NLP tasks (i.e., semantic parsing and machine translation).
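The central object in the abstract, separable permutations, has a simple combinatorial characterization that is easy to illustrate: a permutation is separable if and only if it avoids the patterns 2413 and 3142. The sketch below is illustrative Python only, not the authors' neural reordering module or their dynamic programming algorithm; it checks separability via this pattern-avoidance criterion and confirms the known count for length 4.

```python
from itertools import combinations, permutations

def is_separable(perm):
    # Separable permutations are exactly those avoiding the
    # patterns 2413 and 3142 (a standard characterization).
    for quad in combinations(perm, 4):  # subsequences keep left-to-right order
        ranks = tuple(sorted(quad).index(x) + 1 for x in quad)
        if ranks in ((2, 4, 1, 3), (3, 1, 4, 2)):
            return False
    return True

# Of the 24 permutations of length 4, exactly 22 are separable;
# the counts follow the large Schroeder numbers 1, 2, 6, 22, 90, ...
separable = [p for p in permutations(range(1, 5)) if is_separable(p)]
print(len(separable))  # 22
```

Because separable permutations decompose recursively into binary blocks, their space admits the kind of exact dynamic programming over marginals that the abstract describes, whereas general permutations do not.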

Author Information

Bailin Wang (University of Edinburgh)
Mirella Lapata (University of Edinburgh)
Ivan Titov (University of Edinburgh / University of Amsterdam)
