Dialogue Coherence Assessment Without Explicit Dialogue Act Labels

Mohsen Mesgar, Sebastian Bücker, Iryna Gurevych


Abstract
Recent dialogue coherence models use coherence features designed for monologue texts, e.g., nominal entities, to represent utterances, and then explicitly augment them with dialogue-relevant features, e.g., dialogue act labels. This approach has two drawbacks: (a) the semantics of utterances are limited to entity mentions, and (b) the performance of coherence models relies strongly on the quality of the input dialogue act labels. We address these issues by introducing a novel approach to dialogue coherence assessment. We use dialogue act prediction as an auxiliary task in a multi-task learning scenario to obtain informative utterance representations for coherence assessment. Our approach alleviates the need for explicit dialogue act labels during evaluation. The results of our experiments show that our model substantially outperforms its strong competitors (by more than 20 accuracy points) on the DailyDialog corpus, and performs on par with them on the SwitchBoard corpus, for ranking dialogues with respect to their coherence. We release our source code.
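The multi-task setup described above can be sketched as a shared utterance encoder feeding two heads: a coherence-scoring head (main task) and a dialogue-act-prediction head (auxiliary task), trained with a combined loss. The sketch below is illustrative only, not the authors' implementation; all names, dimensions, and the loss weighting are hypothetical assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's implementation): a shared encoder
# feeds two task heads. Training them jointly lets the auxiliary
# dialogue-act labels shape the shared representation; at evaluation
# time only the coherence head is needed, so no act labels are required.

rng = np.random.default_rng(0)

D_IN, D_HID, N_ACTS = 16, 8, 4              # hypothetical sizes

W_shared = rng.normal(size=(D_IN, D_HID))   # shared encoder weights
w_coh = rng.normal(size=D_HID)              # coherence-scoring head
W_act = rng.normal(size=(D_HID, N_ACTS))    # dialogue-act head

def encode(utt_vec):
    """Shared utterance representation used by both task heads."""
    return np.tanh(utt_vec @ W_shared)

def coherence_score(utt_vec):
    """Main task: scalar coherence score (used alone at evaluation)."""
    return float(encode(utt_vec) @ w_coh)

def act_log_probs(utt_vec):
    """Auxiliary task: log-probabilities over dialogue act classes."""
    logits = encode(utt_vec) @ W_act
    return logits - np.log(np.exp(logits).sum())

def multitask_loss(utt_vec, coh_target, act_label, alpha=0.5):
    """Weighted sum of the two task losses; alpha balances them."""
    l_coh = (coherence_score(utt_vec) - coh_target) ** 2
    l_act = -act_log_probs(utt_vec)[act_label]   # cross-entropy term
    return l_coh + alpha * l_act

utt = rng.normal(size=D_IN)
loss = multitask_loss(utt, coh_target=1.0, act_label=2)
```

During training, gradients from both losses update `W_shared`, which is how the auxiliary act labels inform the coherence representations without being an input at test time.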
Anthology ID:
2020.acl-main.133
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1439–1450
URL:
https://aclanthology.org/2020.acl-main.133
DOI:
10.18653/v1/2020.acl-main.133
PDF:
https://aclanthology.org/2020.acl-main.133.pdf
Video:
http://slideslive.com/38929137
Code:
UKPLab/acl2020-dialogue-coherence-assessment