Unsupervised Text Style Transfer using Language Models as Discriminators
Zichao Yang · Zhiting Hu · Chris Dyer · Eric Xing · Taylor Berg-Kirkpatrick

Tue Dec 04 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #155

Binary classifiers are commonly employed as discriminators in GAN-based unsupervised style transfer models to ensure that transferred sentences are similar to sentences in the target domain. One difficulty with a binary discriminator is that the error signal it provides is sometimes insufficient to train the model to produce richly structured language. In this paper, we propose using a target-domain language model as the discriminator to provide richer, token-level feedback during learning. Because our language model scores sentences directly as a product of locally normalized probabilities, it offers a more stable and more useful training signal to the generator. We train the generator to minimize the negative log-likelihood (NLL) of generated sentences as evaluated by the language model. By using a continuous approximation of the discrete samples, our model can be trained end-to-end with back-propagation. Moreover, we find empirically that with a language model as a structured discriminator, it is possible to eliminate the adversarial training steps that use negative samples, making training more stable. We compare our model with previous work that uses convolutional neural networks (CNNs) as discriminators and show that ours outperforms it significantly on three tasks: word substitution decipherment, sentiment modification, and related language translation.
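The core idea above can be illustrated with a small numerical sketch: score a "sentence" of relaxed (soft) token samples under a fixed target-domain language model, where the NLL is a sum of per-step terms built from locally normalized next-token distributions. The bigram LM, the vocabulary size, and all numbers below are illustrative toy assumptions, not taken from the paper, and the relaxation stands in for whatever continuous approximation (e.g. Gumbel-softmax) the generator would use.

```python
import numpy as np

# Toy setup (all values illustrative): a vocabulary of 3 tokens and a fixed
# target-domain bigram LM, stored as locally normalized rows P(x_t | x_{t-1}).
V = 3
lm = np.array([
    [0.7, 0.2, 0.1],   # P(. | token 0)
    [0.1, 0.8, 0.1],   # P(. | token 1)
    [0.3, 0.3, 0.4],   # P(. | token 2)
])

def soft_nll(soft_tokens):
    """NLL of a sequence of relaxed (soft) one-hot samples under the LM.

    soft_tokens: array of shape (T, V); each row is a probability
    distribution over the vocabulary, e.g. the generator's relaxed output.
    Because every LM factor is locally normalized, the sentence score is a
    product of per-step probabilities, and plugging in soft samples keeps
    the whole expression a smooth function of the generator's outputs.
    """
    nll = 0.0
    prev = soft_tokens[0]            # soft distribution over the first token
    for t in range(1, len(soft_tokens)):
        # Expected next-token distribution under the soft previous token:
        # a mixture of the LM's rows, weighted by prev.
        step_probs = prev @ lm       # shape (V,)
        # Expected negative log-probability of the soft current token.
        nll -= np.log(step_probs) @ soft_tokens[t]
        prev = soft_tokens[t]
    return nll

# A near-one-hot sequence 0 -> 0 -> 0 follows the LM's preferences,
# while 0 -> 2 -> 0 fights them, so the latter incurs a higher NLL.
fluent = np.array([[0.98, 0.01, 0.01]] * 3)
awkward = np.array([[0.98, 0.01, 0.01],
                    [0.01, 0.01, 0.98],
                    [0.98, 0.01, 0.01]])
print(soft_nll(fluent) < soft_nll(awkward))  # True
```

Minimizing `soft_nll` with respect to the soft token distributions is what lets gradients from the language-model discriminator flow back into the generator without sampling discrete tokens.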

Author Information

Zichao Yang (Carnegie Mellon University)
Zhiting Hu (Carnegie Mellon University)
Chris Dyer (DeepMind)
Eric Xing (Petuum Inc. / Carnegie Mellon University)
Taylor Berg-Kirkpatrick (Carnegie Mellon University)