Contributed Talk
in
Workshop: Let's Discuss: Learning Methods for Dialogue

Generative Deep Neural Networks for Dialogue: A Short Review

Iulian Vlad Serban

2016 Contributed Talk

Abstract:

Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results on unstructured tasks such as word-level dialogue response generation. The hope is that such models can leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring minimal domain knowledge and hand-crafting. We review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models are better able to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure.
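The encoder-decoder pattern the abstract refers to can be illustrated with a minimal sketch. This is not the authors' model: the rule tables below stand in for learned networks, and every name and response here is a hypothetical toy. It only shows the information flow, where an encoder compresses the dialogue history into a context and a decoder emits a response token by token conditioned on that context.

```python
# Toy sketch of the Seq2Seq encoder-decoder flow for dialogue.
# Hand-written rules replace learned networks; all names and the
# rule tables are hypothetical illustrations, not the reviewed models.

def encode(tokens):
    """Encoder: compress the input utterance into a fixed context.
    A trained model would output a learned vector; here the 'context'
    is a coarse label derived from the tokens."""
    greetings = {"hello", "hi"}
    for t in tokens:
        if t.lower() in greetings:
            return "greeting"
    return "unknown"

def decode(context, max_len=5):
    """Decoder: generate a response one token at a time, conditioned
    on the context (word-level response generation)."""
    responses = {
        "greeting": ["hello", "how", "are", "you", "?"],
        "unknown": ["i", "am", "not", "sure", "."],
    }
    return responses[context][:max_len]

def respond(utterance):
    """Full pipeline: encode the history, then decode a response."""
    return " ".join(decode(encode(utterance.split())))

print(respond("hello there"))
print(respond("what time is it"))
```

In the reviewed neural models, `encode` and `decode` are recurrent networks trained end-to-end on dialogue corpora, and the context is a continuous vector rather than a discrete label.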
