ASPIRO: Any-shot Structured Parsing-error-Induced ReprOmpting for Consistent Data-to-Text Generation

Martin Vejvar, Yasutaka Fujimoto


Abstract
We present ASPIRO, an approach for structured data verbalisation into short template sentences in zero- to few-shot settings. Unlike previous methods, our approach prompts Large Language Models (LLMs) to directly produce entity-agnostic templates, rather than relying on LLMs to faithfully copy the given example entities or on validating and crafting the templates manually. We incorporate LLM re-prompting, triggered by algorithmic parsing checks, as well as PARENT-metric-induced consistency validation, to identify and rectify template generation problems in real time. Compared to direct LLM output, ASPIRO reduces the parsing error rate of generated verbalisations of RDF triples on the DART dataset by 66% on average. Our best 5-shot text-davinci-003 setup, scoring BLEU of 50.62, METEOR of 45.16, BLEURT of 0.82, NUBIA of 0.87, and PARENT of 0.8962 on the Rel2Text dataset, competes effectively with recent fine-tuned pretrained language models.
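The core loop the abstract describes (prompting an LLM for an entity-agnostic template and re-prompting when algorithmic parsing checks fail) can be sketched roughly as follows. The placeholder tokens <subject> and <object>, the complete callable, and the prompt wording are illustrative assumptions rather than the authors' implementation; the PARENT-based consistency validation mentioned in the abstract would act as a further check on templates this loop accepts.

# Minimal Python sketch of a parsing-error-induced re-prompting loop in the
# spirit of ASPIRO. Placeholder tokens, prompts, and the `complete` backend
# are assumptions for illustration, not the paper's exact implementation.
import re
from typing import Callable, List, Optional


def check_template(template: str) -> List[str]:
    """Algorithmic parsing checks: return a list of error messages (empty if valid)."""
    errors = []
    if template.count("<subject>") != 1:
        errors.append("template must contain the <subject> placeholder exactly once")
    if template.count("<object>") != 1:
        errors.append("template must contain the <object> placeholder exactly once")
    if re.search(r"<(?!subject>|object>)[^>]+>", template):
        errors.append("template contains an unknown placeholder")
    return errors


def generate_template(
    relation: str,
    complete: Callable[[str], str],  # any text-completion backend (e.g. an LLM API wrapper)
    max_retries: int = 3,
) -> Optional[str]:
    """Ask the model for an entity-agnostic template; re-prompt on parsing errors."""
    prompt = (
        f"Write one short sentence template for the relation '{relation}' "
        "using the placeholders <subject> and <object> exactly once each."
    )
    for _ in range(max_retries + 1):
        template = complete(prompt).strip()
        errors = check_template(template)
        if not errors:
            return template
        # Structured re-prompting: feed the detected parsing errors back to the model.
        prompt = (
            f"The previous template '{template}' is invalid: "
            + "; ".join(errors)
            + ". Please rewrite it so that it satisfies all constraints."
        )
    return None  # give up after max_retries re-prompts

Keeping the checks purely algorithmic (string and regex tests) is what makes the re-prompting trigger cheap and deterministic; only templates that pass them would proceed to the consistency validation stage.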
Anthology ID:
2023.findings-emnlp.229
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3550–3563
URL:
https://aclanthology.org/2023.findings-emnlp.229
DOI:
10.18653/v1/2023.findings-emnlp.229
Cite (ACL):
Martin Vejvar and Yasutaka Fujimoto. 2023. ASPIRO: Any-shot Structured Parsing-error-Induced ReprOmpting for Consistent Data-to-Text Generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3550–3563, Singapore. Association for Computational Linguistics.
Cite (Informal):
ASPIRO: Any-shot Structured Parsing-error-Induced ReprOmpting for Consistent Data-to-Text Generation (Vejvar & Fujimoto, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.229.pdf