Validating Predictive Models Of Evaluative Language For Controllable Data2Text Generation

Maurice Langner, Ralf Klabunde


Abstract
In data2text generation, tabular data is transformed into a text that expresses information from that source domain. While some text types, such as instructions, demand objective and neutral language without any expressive or evaluative content, many other text types are expected to provide expressions for these kinds of subjective meanings. In controllable, pipelined neural NLG, separate learning models, notably regression models, can be used to predict whether some feature deviates strongly enough from an expected value that evaluative language is appropriate for verbalizing this finding. In this paper, we present an empirical study on the comprehension of evaluative adverbs and adjectival modifiers in car reviews, a text type characterized by a mixture of factual information and evaluations expressing positive or negative surprise. We show to what extent regression-based decision boundaries for producing evaluative content in controllable data2text NLG match the expectations that these evaluative markers raise in readers. Finally, we show that regression values combined with the standard deviation of the technical input data constitute reasonable Boolean thresholds for both positive and negative surprise, which provide the basis for the development of more complex models that also include the scalar base of adverbs and modifiers.
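The thresholding idea summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows, under assumed names (fit_expectation_model, decide_surprise, the tuning constant k), how a regression prediction for a technical feature and the feature's standard deviation could be combined into a Boolean decision on whether evaluative language is warranted.

```python
from dataclasses import dataclass

import numpy as np
from sklearn.linear_model import LinearRegression


@dataclass
class SurpriseDecision:
    above_expected: bool  # observed value surprisingly high
    below_expected: bool  # observed value surprisingly low


def fit_expectation_model(X: np.ndarray, y: np.ndarray):
    """Fit a regression model predicting the expected value of one
    technical feature (e.g. fuel consumption) from the other car
    attributes, and record that feature's standard deviation."""
    model = LinearRegression().fit(X, y)
    return model, float(np.std(y))


def decide_surprise(model: LinearRegression, std: float,
                    x: np.ndarray, observed: float,
                    k: float = 1.0) -> SurpriseDecision:
    """Boolean threshold: trigger evaluative content only if the observed
    value deviates from the regression prediction by more than k standard
    deviations (k is a hypothetical tuning constant, not from the paper)."""
    expected = float(model.predict(x.reshape(1, -1))[0])
    deviation = observed - expected
    return SurpriseDecision(above_expected=deviation > k * std,
                            below_expected=deviation < -k * std)
```

Whether an above- or below-expected value maps to positive or negative surprise depends on the feature's polarity (higher horsepower vs. higher fuel consumption), which is left to the surrounding generation pipeline in this sketch.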
Anthology ID:
2023.inlg-main.22
Volume:
Proceedings of the 16th International Natural Language Generation Conference
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues:
INLG | SIGDIAL
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
313–322
URL:
https://aclanthology.org/2023.inlg-main.22
DOI:
10.18653/v1/2023.inlg-main.22
Cite (ACL):
Maurice Langner and Ralf Klabunde. 2023. Validating Predictive Models Of Evaluative Language For Controllable Data2Text Generation. In Proceedings of the 16th International Natural Language Generation Conference, pages 313–322, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
Validating Predictive Models Of Evaluative Language For Controllable Data2Text Generation (Langner & Klabunde, INLG-SIGDIAL 2023)
PDF:
https://aclanthology.org/2023.inlg-main.22.pdf
Supplementary attachment:
2023.inlg-main.22.Supplementary_Attachment.zip