Evaluating Generative Models for Graph-to-Text Generation

Shuzhou Yuan, Michael Faerber


Abstract
Large language models (LLMs) have been widely employed for graph-to-text generation tasks. However, finetuning LLMs requires significant training resources and annotation work. In this paper, we explore the capability of generative models to generate descriptive text from graph data in a zero-shot setting. Specifically, we evaluate GPT-3 and ChatGPT on two graph-to-text datasets and compare their performance with that of finetuned models such as T5 and BART. Our results demonstrate that generative models are capable of generating fluent and coherent text, achieving BLEU scores of 10.57 and 11.08 on the AGENDA and WebNLG datasets, respectively. However, our error analysis reveals that generative models still struggle to understand the semantic relations between entities, and they tend to generate text with hallucinations or irrelevant information. As part of the error analysis, we utilize BERT to detect machine-generated text and achieve high macro-F1 scores. We have made the text generated by the generative models publicly available.
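A minimal sketch of the zero-shot pipeline the abstract describes: verbalizing a knowledge-graph triple set into a prompt and scoring the model output against a reference with BLEU. The prompt template, the example triples, and the sacrebleu-based scoring are illustrative assumptions, not the authors' exact setup.

```python
# Illustrative sketch only: the prompt template and scoring below are
# assumptions about a typical zero-shot graph-to-text pipeline, not the
# exact setup used in the paper.
import sacrebleu

# A WebNLG-style triple set: (subject, relation, object).
triples = [
    ("Alan_Bean", "nationality", "United_States"),
    ("Alan_Bean", "occupation", "Test_pilot"),
]

def verbalize(triples):
    """Flatten graph triples into a textual prompt for a zero-shot LLM."""
    facts = "; ".join(f"{s} | {r} | {o}" for s, r, o in triples)
    return (
        "Generate a fluent description of the following knowledge graph "
        f"triples:\n{facts}\nDescription:"
    )

prompt = verbalize(triples)

# `generated` would come from the LLM (e.g. an API call with `prompt`);
# a hard-coded string stands in here so the sketch runs on its own.
generated = "Alan Bean is a test pilot from the United States."
reference = "Alan Bean, a United States national, worked as a test pilot."

# Corpus-level BLEU against a single reference per hypothesis.
bleu = sacrebleu.corpus_bleu([generated], [[reference]])
print(f"BLEU: {bleu.score:.2f}")
```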
Anthology ID:
2023.ranlp-1.133
Volume:
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Month:
September
Year:
2023
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
1256–1264
URL:
https://aclanthology.org/2023.ranlp-1.133
Cite (ACL):
Shuzhou Yuan and Michael Faerber. 2023. Evaluating Generative Models for Graph-to-Text Generation. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 1256–1264, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Evaluating Generative Models for Graph-to-Text Generation (Yuan & Faerber, RANLP 2023)
PDF:
https://aclanthology.org/2023.ranlp-1.133.pdf