Poster
Variational Memory Encoder-Decoder
Hung Le · Truyen Tran · Thin Nguyen · Svetha Venkatesh
Room 210 #87
Keywords: [ Deep Autoencoders ] [ Generative Models ] [ Memory-Augmented Neural Networks ] [ Dialog- or Communication-Based Learning ]
Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoders often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into the neural encoder-decoder via the use of external memory as a mixture model, namely the Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metric-based and qualitative evaluations.
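The core mechanism the abstract describes, in which each external-memory read defines one mode of a Mixture-of-Gaussians latent prior at every decoding timestep, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' released code: the module name `MoGLatentFromMemory`, the parameters `mem_dim` and `latent_dim`, and the use of the read weights directly as mixture weights are all hypothetical choices made for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoGLatentFromMemory(nn.Module):
    """Illustrative sketch: turn K memory read vectors into a
    Mixture-of-Gaussians prior over the latent variable z_t at one
    decoding timestep. Names and shapes are assumptions, not VMED's
    exact implementation."""

    def __init__(self, mem_dim: int, latent_dim: int):
        super().__init__()
        self.to_mean = nn.Linear(mem_dim, latent_dim)    # mode mean from a read vector
        self.to_logvar = nn.Linear(mem_dim, latent_dim)  # mode log-variance from a read vector

    def forward(self, reads: torch.Tensor, weights: torch.Tensor):
        # reads:   (batch, K, mem_dim) -- K read vectors from external memory
        # weights: (batch, K)          -- read scores, reused here as mixture weights
        means = self.to_mean(reads)                      # (batch, K, latent_dim)
        logvars = self.to_logvar(reads)                  # (batch, K, latent_dim)
        pi = F.softmax(weights, dim=-1)                  # mixture weights sum to 1

        # Pick one mode per example (non-differentiable here; a sketch,
        # not a training-ready estimator), then reparameterize within it.
        k = torch.multinomial(pi, num_samples=1).squeeze(-1)         # (batch,)
        idx = k.view(-1, 1, 1).expand(-1, 1, means.size(-1))
        mu = means.gather(1, idx).squeeze(1)                         # (batch, latent_dim)
        logvar = logvars.gather(1, idx).squeeze(1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)      # z_t for the decoder
        return z, pi, means, logvars

# Usage: z would condition the decoder at this timestep.
mod = MoGLatentFromMemory(mem_dim=64, latent_dim=32)
reads = torch.randn(2, 4, 64)   # batch of 2, K = 4 memory slots
weights = torch.randn(2, 4)
z, pi, means, logvars = mod(reads, weights)
print(z.shape)                  # torch.Size([2, 32])
```

Because each mode is tied to a distinct memory read, sampling z_t from different modes yields distinct but coherent decoder conditions, which is how the model introduces variability across timesteps.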