PiVe: Prompting with Iterative Verification Improving Graph-based Generative Capability of LLMs

Jiuzhou Han, Nigel Collier, Wray Buntine, Ehsan Shareghi


Abstract
Large language models (LLMs) have shown great abilities in solving various natural language tasks across different domains. However, due to their training objective and pre-training data, LLMs are not well equipped for tasks involving structured data generation. We propose a framework, Prompting with Iterative Verification (PiVe), to improve the graph-based generative capability of LLMs. We show how a small language model can be trained to act as a verifier module for the output of an LLM (i.e., ChatGPT, GPT-4), and to iteratively improve the LLM's output via fine-grained corrective instructions. We also show how the verifier module can apply iterative corrections offline, offering a more cost-effective solution to the text-to-graph generation task. Experiments on three graph-based datasets show consistent improvements gained via PiVe. Additionally, we create GenWiki-HIQ and highlight that the verifier module can be used as a data augmentation tool to help improve the quality of automatically generated parallel text-graph datasets.
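For concreteness, below is a minimal sketch of the generate-verify-correct loop the abstract describes. It is an illustration only: `query_llm` and `verify` are hypothetical stand-ins for the LLM (e.g., ChatGPT or GPT-4) and the fine-tuned small-LM verifier, and the actual prompts, verifier training, and stopping criteria are those specified in the paper, not this sketch.

```python
# Minimal sketch of a PiVe-style iterative verification loop.
# `query_llm` and `verify` are hypothetical callables: in the paper, the
# generator is an LLM and the verifier is a small fine-tuned LM that
# emits fine-grained corrective instructions (or signals that none are needed).

def pive(text, query_llm, verify, max_iters=3):
    """Generate a graph (a list of triples) from text, refining it iteratively."""
    prompt = (
        "Transform the text into a list of (subject, relation, object) triples:\n"
        f"{text}"
    )
    graph = query_llm(prompt)
    for _ in range(max_iters):
        # The verifier checks the graph against the text and returns a
        # corrective instruction, e.g. 'add triple (X, bornIn, Y)'.
        correction = verify(text, graph)
        if correction is None:  # verifier finds nothing to fix: stop early
            break
        # Feed the corrective instruction back to the LLM with the prior output.
        prompt = (
            f"Text: {text}\n"
            f"Current triples: {graph}\n"
            f"Apply this correction and return the full triple list: {correction}"
        )
        graph = query_llm(prompt)
    return graph
```

The offline variant mentioned in the abstract would instead have the verifier apply its corrections to `graph` directly, without further LLM calls, which is what makes it more cost-effective.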
Anthology ID: 2024.findings-acl.400
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6702–6718
URL: https://aclanthology.org/2024.findings-acl.400
Cite (ACL): Jiuzhou Han, Nigel Collier, Wray Buntine, and Ehsan Shareghi. 2024. PiVe: Prompting with Iterative Verification Improving Graph-based Generative Capability of LLMs. In Findings of the Association for Computational Linguistics ACL 2024, pages 6702–6718, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): PiVe: Prompting with Iterative Verification Improving Graph-based Generative Capability of LLMs (Han et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.400.pdf