Score function-based natural language generation (NLG) approaches such as REINFORCE generally suffer from low sample efficiency and training instability. This is mainly due to the non-differentiable nature of sampling in discrete spaces, which forces these methods to treat the discriminator as a black box and ignore its gradient information. To improve sample efficiency and reduce the variance of REINFORCE, we propose a novel approach, TaylorGAN, which augments the gradient estimation with an off-policy update and a first-order Taylor expansion. This approach enables us to train NLG models from scratch with a smaller batch size and without maximum likelihood pre-training, and it outperforms existing GAN-based methods on multiple metrics of quality and diversity.
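To make the core idea concrete, below is a minimal PyTorch sketch of a first-order Taylor reward estimate around a sampled token's embedding. Everything here is an illustrative assumption rather than the authors' implementation: the discriminator is a stand-in that scores a single token embedding (the paper's discriminator scores whole sequences), and the names disc, taylor_rewards, and vocab_emb are hypothetical.

    import torch

    # Stand-in scalar discriminator over one token embedding (hypothetical).
    disc = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

    def taylor_rewards(score, sampled_emb, vocab_emb):
        # First-order Taylor expansion of the discriminator score around the
        # sampled embedding e_t:
        #   R(w) ~= D(e_t) + grad_e D(e_t) . (e(w) - e_t)
        # so a single discriminator pass yields an approximate reward for
        # every token w in the vocabulary.
        grad, = torch.autograd.grad(score, sampled_emb)
        return score.detach() + (vocab_emb - sampled_emb.detach()) @ grad

    vocab_emb = torch.randn(10000, 64)                # embedding table, one row per token
    e_t = vocab_emb[42].clone().requires_grad_(True)  # embedding of the sampled token
    score = disc(e_t).squeeze()                       # scalar discriminator score D(e_t)
    rewards = taylor_rewards(score, e_t, vocab_emb)   # shape (10000,)

Because the expansion assigns an approximate reward to every vocabulary token from a single discriminator evaluation, tokens that were never actually sampled can still contribute to the update, which is what makes an off-policy, lower-variance gradient estimate possible.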
Author Information
Chun-Hsing Lin (National Taiwan University)
Siang-Ruei Wu (National Taiwan University)
Hung-yi Lee (National Taiwan University)
Yun-Nung Chen (National Taiwan University)