Neural Conversation Model Controllable by Given Dialogue Act Based on Adversarial Learning and Label-aware Objective

Seiya Kawano, Koichiro Yoshino, Satoshi Nakamura


Abstract
Building a controllable neural conversation model (NCM) is an important task. In this paper, we focus on controlling the responses of an NCM by using dialogue act labels of the responses as conditions. We introduce an adversarial learning framework for generating conditional responses, with a new objective for the discriminator that explicitly distinguishes sentences by their labels. This change strongly encourages the generation of label-conditioned sentences. We compared the proposed method with several existing methods for conditional response generation. The experimental results show that our proposed method achieves higher controllability over dialogue acts while maintaining naturalness that is higher than or comparable to the existing methods.
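To make the label-aware objective described above concrete, the following is a minimal PyTorch-style sketch (not the authors' released code; all module and function names are illustrative assumptions). The idea it illustrates: the discriminator classifies a response into one of K dialogue-act classes or an extra "generated" class, and the generator is rewarded when its sampled response is classified as the dialogue act it was conditioned on.

# Minimal sketch of a label-aware adversarial objective.
# Assumptions: token-id inputs, a GRU sentence encoder, and K dialogue-act
# classes plus one extra class reserved for generated (fake) responses.
import torch
import torch.nn as nn

class LabelAwareDiscriminator(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_dialogue_acts):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # K dialogue-act classes + 1 class for generated responses
        self.classifier = nn.Linear(hidden_dim, num_dialogue_acts + 1)

    def forward(self, token_ids):
        emb = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        _, h = self.encoder(emb)              # h: (1, batch, hidden_dim)
        return self.classifier(h.squeeze(0))  # (batch, K + 1) logits

def discriminator_loss(disc, real_tokens, real_acts, fake_tokens, num_acts):
    # Real responses must be classified as their gold dialogue act;
    # generated responses must be classified as the extra "fake" class.
    ce = nn.CrossEntropyLoss()
    fake_labels = torch.full((fake_tokens.size(0),), num_acts, dtype=torch.long)
    return ce(disc(real_tokens), real_acts) + ce(disc(fake_tokens), fake_labels)

def generator_adversarial_loss(disc, fake_tokens, target_acts):
    # The generator is rewarded when its response is recognized as the
    # dialogue act it was conditioned on, which encourages controllability.
    ce = nn.CrossEntropyLoss()
    return ce(disc(fake_tokens), target_acts)

Since sampled tokens are discrete, propagating this adversarial signal back to a sequence generator in practice requires a policy-gradient (REINFORCE-style) or continuous-relaxation trick; the sketch omits that step.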
Anthology ID:
W19-8627
Volume:
Proceedings of the 12th International Conference on Natural Language Generation
Month:
October–November
Year:
2019
Address:
Tokyo, Japan
Editors:
Kees van Deemter, Chenghua Lin, Hiroya Takamura
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
198–207
URL:
https://aclanthology.org/W19-8627
DOI:
10.18653/v1/W19-8627
Cite (ACL):
Seiya Kawano, Koichiro Yoshino, and Satoshi Nakamura. 2019. Neural Conversation Model Controllable by Given Dialogue Act Based on Adversarial Learning and Label-aware Objective. In Proceedings of the 12th International Conference on Natural Language Generation, pages 198–207, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Neural Conversation Model Controllable by Given Dialogue Act Based on Adversarial Learning and Label-aware Objective (Kawano et al., INLG 2019)
PDF:
https://aclanthology.org/W19-8627.pdf