

Poster

Learning to Transduce with Unbounded Memory

Edward Grefenstette · Karl Moritz Hermann · Mustafa Suleyman · Phil Blunsom

Location: 210 C #16

Abstract:

Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems. In this paper we explore the representational power of these models using synthetic grammars designed to exhibit phenomena similar to those found in real transduction problems such as machine translation. These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs and are often able to learn the underlying generating algorithms in our transduction experiments.
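The core idea behind these memory-augmented networks is to replace discrete push and pop operations with real-valued "strengths", so that the whole data structure stays differentiable and can be trained with gradient descent alongside an RNN controller. The following is a minimal, illustrative sketch (not the authors' reference code) of how such a continuously differentiable stack can behave; the names ContinuousStack, push_strength, and pop_strength are assumptions made for this example, and the loop-based implementation is for clarity rather than efficiency.

    # Illustrative sketch of a continuously differentiable stack.
    # Push/pop amounts are real values in [0, 1], so entries can be
    # partially popped and reads blend the topmost ~1.0 units of strength.
    import numpy as np

    class ContinuousStack:
        def __init__(self, value_dim):
            self.values = np.zeros((0, value_dim))   # stored vectors, bottom to top
            self.strengths = np.zeros(0)             # how "present" each entry is

        def step(self, value, push_strength, pop_strength):
            """One timestep: soft pop by pop_strength, then soft push of value."""
            # Soft pop: remove pop_strength worth of strength from the top downwards.
            remaining = pop_strength
            new_strengths = self.strengths.copy()
            for i in reversed(range(len(new_strengths))):
                removed = min(remaining, new_strengths[i])
                new_strengths[i] -= removed
                remaining -= removed
            # Soft push: append the new value with the given strength.
            self.values = np.vstack([self.values, value[None, :]])
            self.strengths = np.append(new_strengths, push_strength)
            return self.read()

        def read(self):
            """Return a weighted blend of the top entries, up to 1.0 total strength."""
            result = np.zeros(self.values.shape[1])
            budget = 1.0
            for i in reversed(range(len(self.strengths))):
                w = min(self.strengths[i], budget)
                result += w * self.values[i]
                budget -= w
                if budget <= 0:
                    break
            return result

Because every operation is built from additions, minima, and rectifications of continuous quantities, a controller network can learn when to push and pop by gradient descent; queue and deque variants follow the same principle but read and remove from different ends of the memory.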
