Responsive and Self-Expressive Dialogue Generation

Kozo Chikai, Junya Takayama, Yuki Arase


Abstract
A neural conversation model is a promising approach to developing dialogue systems with the ability to chit-chat. It allows training a model in an end-to-end manner without complex rule design or feature engineering. However, as a side effect, the neural model tends to generate safe but uninformative and insensitive responses like “OK” and “I don’t know.” Such replies are called generic responses and are regarded as a critical problem for user engagement of dialogue systems. For a more engaging chit-chat experience, we propose a neural conversation model that generates responsive and self-expressive replies. Specifically, our model generates domain-aware and sentiment-rich responses. Experiments empirically confirmed that our model outperformed the sequence-to-sequence model; 68.1% of our responses were domain-aware with sentiment polarities, compared with only 2.7% of the responses generated by the sequence-to-sequence model.
Anthology ID:
W19-4116
Volume:
Proceedings of the First Workshop on NLP for Conversational AI
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Yun-Nung Chen, Tania Bedrax-Weiss, Dilek Hakkani-Tur, Anuj Kumar, Mike Lewis, Thang-Minh Luong, Pei-Hao Su, Tsung-Hsien Wen
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
139–149
URL:
https://aclanthology.org/W19-4116
DOI:
10.18653/v1/W19-4116
Cite (ACL):
Kozo Chikai, Junya Takayama, and Yuki Arase. 2019. Responsive and Self-Expressive Dialogue Generation. In Proceedings of the First Workshop on NLP for Conversational AI, pages 139–149, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Responsive and Self-Expressive Dialogue Generation (Chikai et al., ACL 2019)
PDF:
https://aclanthology.org/W19-4116.pdf
Code:
KChikai/Responsive-Dialogue-Generation
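
The repository above hosts the authors' implementation. As a rough orientation only, the snippet below sketches the kind of model the abstract describes: a sequence-to-sequence response generator whose decoder is additionally conditioned on domain and sentiment labels. This is a minimal PyTorch sketch under assumed names and dimensions (ConditionedSeq2Seq, cond_dim, and the concatenation-based conditioning are all illustrative), not the authors' architecture; consult the paper and repository for the actual method.

# Illustrative sketch only (not the authors' code): a GRU encoder-decoder
# whose decoder input is augmented with domain and sentiment embeddings,
# in the spirit of "domain-aware and sentiment-rich" response generation.
import torch
import torch.nn as nn

class ConditionedSeq2Seq(nn.Module):
    def __init__(self, vocab_size, n_domains, n_sentiments,
                 emb_dim=128, hid_dim=256, cond_dim=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.domain_emb = nn.Embedding(n_domains, cond_dim)
        self.sentiment_emb = nn.Embedding(n_sentiments, cond_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Decoder input = word embedding concatenated with both condition vectors.
        self.decoder = nn.GRU(emb_dim + 2 * cond_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt, domain_id, sentiment_id):
        # src, tgt: (batch, seq_len) token ids; domain_id, sentiment_id: (batch,)
        _, h = self.encoder(self.word_emb(src))            # h: (1, batch, hid_dim)
        cond = torch.cat([self.domain_emb(domain_id),
                          self.sentiment_emb(sentiment_id)], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, tgt.size(1), -1)
        dec_in = torch.cat([self.word_emb(tgt), cond], dim=-1)
        dec_out, _ = self.decoder(dec_in, h)
        return self.out(dec_out)                           # (batch, seq_len, vocab)

# Toy forward pass on random ids; in real training the decoder input would be
# the target sequence shifted right (teacher forcing).
model = ConditionedSeq2Seq(vocab_size=5000, n_domains=10, n_sentiments=3)
src = torch.randint(0, 5000, (2, 12))
tgt = torch.randint(0, 5000, (2, 9))
logits = model(src, tgt, torch.tensor([1, 4]), torch.tensor([0, 2]))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), tgt.reshape(-1))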