Naturalness Evaluation of Natural Language Generation in Task-oriented Dialogues Using BERT

Ye Liu, Wolfgang Maier, Wolfgang Minker, Stefan Ultes


Abstract
This paper presents an automatic method for evaluating the naturalness of natural language generation in dialogue systems. While naturalness evaluation has previously relied on expensive and time-consuming human labor, we introduce it as a novel automatic task. By fine-tuning the BERT model, our proposed naturalness evaluation method achieves robust results and outperforms the baselines: support vector machines, bi-directional LSTMs, and BLEURT. In addition, the training speed and evaluation performance of the naturalness model are improved by transferring linguistic knowledge from quality and informativeness models.
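As a rough illustration of the SVM baseline mentioned in the abstract, the sketch below classifies system utterances as natural vs. unnatural using character n-gram TF-IDF features and a linear SVM. The toy utterances and labels are invented for illustration only and do not come from the paper's dataset; the feature choice (`char_wb` n-grams) is likewise an assumption, not the authors' configuration.

```python
# Hypothetical sketch of an SVM naturalness classifier (baseline only).
# Toy data: 1 = natural, 0 = unnatural (word-salad) utterances.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

train_utterances = [
    "The restaurant serves Italian food in the city centre.",
    "There are three hotels matching your request.",
    "restaurant italian food food centre the the",  # disfluent
    "hotel request three matching are hotels",      # disfluent
]
train_labels = [1, 1, 0, 0]

# TF-IDF over character n-grams feeds a linear-kernel SVM.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    SVC(kernel="linear"),
)
model.fit(train_utterances, train_labels)

preds = model.predict(["The hotel is located near the station."])
print(preds)
```

The paper's actual method instead fine-tunes BERT on labeled utterances, which removes the need for hand-chosen features like the n-grams above.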
Anthology ID:
2021.ranlp-1.96
Original:
2021.ranlp-1.96v1
Version 2:
2021.ranlp-1.96v2
Volume:
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
Month:
September
Year:
2021
Address:
Held Online
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
839–845
URL:
https://aclanthology.org/2021.ranlp-1.96
Cite (ACL):
Ye Liu, Wolfgang Maier, Wolfgang Minker, and Stefan Ultes. 2021. Naturalness Evaluation of Natural Language Generation in Task-oriented Dialogues Using BERT. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 839–845, Held Online. INCOMA Ltd.
Cite (Informal):
Naturalness Evaluation of Natural Language Generation in Task-oriented Dialogues Using BERT (Liu et al., RANLP 2021)
PDF:
https://aclanthology.org/2021.ranlp-1.96.pdf