Polite Dialogue Generation Without Parallel Data

Tong Niu, Mohit Bansal

Abstract
Stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually relevant, as well as paralinguistically accurate. Moreover, parallel datasets for regular-to-stylistic pairs are usually unavailable. We present three weakly-supervised models that can generate diverse, polite (or rude) dialogue responses without parallel data. Our late fusion model (Fusion) merges the decoder of an encoder-attention-decoder dialogue model with a language model trained on stand-alone polite utterances. Our label-finetuning (LFT) model prepends to each source sequence a politeness-score scaled label (predicted by our state-of-the-art politeness classifier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the corresponding score. Our reinforcement learning model (Polite-RL) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sampled response. We also present two retrieval-based, polite dialogue model baselines. Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.
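To make two of the mechanisms named in the abstract concrete, the sketch below mixes a dialogue decoder's next-token distribution with a polite language model's distribution (Fusion) and scales a prepended label embedding by a target politeness score (LFT). This is a minimal, hypothetical numpy sketch, not the authors' implementation; all names, shapes, score values, and the fusion weight alpha are assumptions.

    # Minimal sketch (not the authors' code) of two mechanisms from the abstract;
    # all names, shapes, and constants below are hypothetical.
    import numpy as np

    def softmax(x):
        """Numerically stable softmax over a 1-D logit vector."""
        e = np.exp(x - x.max())
        return e / e.sum()

    def late_fusion(dialogue_logits, polite_lm_logits, alpha=0.3):
        """Fusion model: at each decoding step, mix the seq2seq decoder's
        next-token distribution with that of a language model trained on
        stand-alone polite utterances (alpha = assumed fusion weight)."""
        p_dialogue = softmax(dialogue_logits)
        p_polite = softmax(polite_lm_logits)
        return (1 - alpha) * p_dialogue + alpha * p_polite  # still sums to 1

    def scaled_label_embedding(label_embedding, politeness_score):
        """LFT model: scale the politeness label's embedding by the desired
        politeness score before prepending it to the source sequence
        (e.g., near 1.0 for polite, 0.5 for neutral, near 0.0 for rude)."""
        return politeness_score * label_embedding

    # Polite-RL (not shown): the politeness classifier's score on a sampled
    # response serves as a reward proportional to its politeness.

    # Toy usage with a 5-word vocabulary and a 4-dim label embedding.
    rng = np.random.default_rng(0)
    fused = late_fusion(rng.normal(size=5), rng.normal(size=5))
    polite_label = scaled_label_embedding(rng.normal(size=4), politeness_score=0.9)
    print(fused.sum())  # -> 1.0

Mixing in probability space (rather than logit space) keeps the fused output a valid distribution, which is one common way to realize late fusion; the Polite-RL model needs no decoding-time mixing at all, since politeness is encouraged through the training reward.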
Anthology ID: Q18-1027
Volume: Transactions of the Association for Computational Linguistics, Volume 6
Year: 2018
Address: Cambridge, MA
Editors: Lillian Lee, Mark Johnson, Kristina Toutanova, Brian Roark
Venue: TACL
Publisher: MIT Press
Pages: 373–389
URL: https://aclanthology.org/Q18-1027
DOI: 10.1162/tacl_a_00027
Cite (ACL): Tong Niu and Mohit Bansal. 2018. Polite Dialogue Generation Without Parallel Data. Transactions of the Association for Computational Linguistics, 6:373–389.
Cite (Informal): Polite Dialogue Generation Without Parallel Data (Niu & Bansal, TACL 2018)
PDF: https://aclanthology.org/Q18-1027.pdf