TSDG: Content-aware Neural Response Generation with Two-stage Decoding Process
Junsheng Kong | Zhicheng Zhong | Yi Cai | Xin Wu | Da Ren
Findings of the Association for Computational Linguistics: EMNLP 2020
Neural response generation models have achieved remarkable progress in recent years but still tend to yield irrelevant and uninformative responses. One reason is that encoder-decoder models use a single decoder to generate the complete response in one pass, which favors high-frequency function words carrying little semantic information over low-frequency content words that carry more. To address this issue, we propose a content-aware model with a two-stage decoding process, named Two-stage Dialogue Generation (TSDG). We separate the decoding of content words from that of function words, so that content words can be generated independently without interference from function words. Experimental results on two datasets show that our model significantly outperforms several competitive generative models in both automatic and human evaluation.
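The abstract only sketches the decoding split at a high level, so the following is a minimal, hypothetical PyTorch illustration of a two-stage decoding process in this spirit: a first decoder predicts content words from the encoded dialogue context, and a second decoder generates the complete response conditioned on both the context and a summary of those content words. All module names, layer sizes, and the way the content representation is fed to the second stage are assumptions made for illustration, not the paper's actual TSDG architecture.

```python
# Hypothetical sketch of two-stage decoding (not the paper's implementation):
# stage 1 decodes content words from the context; stage 2 decodes the full
# response conditioned on the context and the content-word representation.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 64, 128  # illustrative sizes

class TwoStageDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        # Stage 1: content-word decoder (content words only).
        self.content_dec = nn.GRU(EMB, HID, batch_first=True)
        self.content_out = nn.Linear(HID, VOCAB)
        # Stage 2: response decoder conditioned on context + content words.
        self.resp_dec = nn.GRU(EMB + HID, HID, batch_first=True)
        self.resp_out = nn.Linear(HID, VOCAB)

    def forward(self, context_ids, content_ids, response_ids):
        # Encode the dialogue context into a final hidden state.
        _, ctx_h = self.encoder(self.embed(context_ids))            # (1, B, HID)

        # Stage 1: predict content words, initialized from the context state.
        c_out, _ = self.content_dec(self.embed(content_ids), ctx_h)
        content_logits = self.content_out(c_out)                    # (B, Tc, V)

        # Summarize the content-word states and feed the summary to the
        # second-stage decoder at every step (one simple conditioning choice).
        content_summary = c_out.mean(dim=1, keepdim=True)           # (B, 1, HID)
        resp_emb = self.embed(response_ids)
        resp_in = torch.cat(
            [resp_emb, content_summary.expand(-1, resp_emb.size(1), -1)], dim=-1)

        # Stage 2: generate the complete response (content + function words).
        r_out, _ = self.resp_dec(resp_in, ctx_h)
        response_logits = self.resp_out(r_out)                      # (B, Tr, V)
        return content_logits, response_logits

# Toy usage with random token ids, just to exercise both stages.
model = TwoStageDecoder()
ctx = torch.randint(0, VOCAB, (2, 10))     # dialogue context
content = torch.randint(0, VOCAB, (2, 4))  # content-word targets
resp = torch.randint(0, VOCAB, (2, 12))    # full-response targets
content_logits, response_logits = model(ctx, content, resp)
print(content_logits.shape, response_logits.shape)
```

At inference time, the first stage would presumably be decoded (e.g. greedily or with beam search) before the second stage runs; the toy call above simply feeds target ids for both stages to show the data flow.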