Improving Cross-Domain Low-Resource Text Generation through LLM Post-Editing: A Programmer-Interpreter Approach

Zhuang Li, Levon Haroutunian, Raj Tumuluri, Philip Cohen, Reza Haf


Abstract
Post-editing has proven effective in improving the quality of text generated by large language models (LLMs) such as GPT-3.5 or GPT-4, particularly when direct updating of their parameters to enhance text quality is infeasible or expensive. However, relying solely on smaller language models for post-editing can limit the LLMs’ ability to generalize across domains. Moreover, the editing strategies in these methods are not optimally designed for text generation tasks. To address these limitations, we propose a neural programmer-interpreter approach that preserves the domain generalization ability of LLMs while editing their output. The editing actions in this framework are specifically devised for text generation. Extensive experiments demonstrate that the programmer-interpreter significantly enhances GPT-3.5’s performance in logical form-to-text conversion and low-resource machine translation, surpassing other state-of-the-art (SOTA) LLM post-editing methods in cross-domain settings.
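The page itself carries no implementation details, but the abstract's core idea (a small "programmer" model proposes edit actions over an LLM draft, and a deterministic "interpreter" applies them) can be illustrated with a minimal sketch. The Python below is an assumption-laden illustration only: the names (EditAction, interpreter, post_edit), the token-level action set, and the toy programmer are hypothetical and not the authors' actual formulation or action inventory.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical action set for illustration; the paper's actual
# editing actions for text generation may differ.
@dataclass
class EditAction:
    op: str          # "keep", "delete", or "insert"
    position: int    # token index in the draft the action applies to
    text: str = ""   # text to insert (used only by "insert")

def interpreter(draft: str, actions: List[EditAction]) -> str:
    """Deterministically apply the programmer's edit actions to the LLM draft."""
    tokens = draft.split()
    output: List[str] = []
    for i, tok in enumerate(tokens):
        acts = [a for a in actions if a.position == i]
        keep = True
        for a in acts:
            if a.op == "delete":
                keep = False
            elif a.op == "insert":
                output.append(a.text)
        if keep:
            output.append(tok)
    return " ".join(output)

def post_edit(source: str,
              draft: str,
              programmer: Callable[[str, str], List[EditAction]]) -> str:
    """One post-editing pass: the programmer proposes edits, the interpreter applies them."""
    actions = programmer(source, draft)
    return interpreter(draft, actions)

# Toy stand-in for the learned programmer model: deletes one spurious token.
def toy_programmer(source: str, draft: str) -> List[EditAction]:
    return [EditAction(op="delete", position=2)]

if __name__ == "__main__":
    draft = "the cat really sat on the mat"
    print(post_edit("cat sits on mat", draft, toy_programmer))
    # -> "the cat sat on the mat"
```

In the paper's setting, the programmer would be a learned model trained on low-resource, cross-domain data, while the LLM (e.g., GPT-3.5) produces the initial draft; the sketch only shows how an interpreter can apply symbolic edit actions to that draft.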
Anthology ID: 2024.findings-eacl.24
Volume: Findings of the Association for Computational Linguistics: EACL 2024
Month: March
Year: 2024
Address: St. Julian's, Malta
Editors: Yvette Graham, Matthew Purver
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 347–354
URL: https://aclanthology.org/2024.findings-eacl.24
Cite (ACL): Zhuang Li, Levon Haroutunian, Raj Tumuluri, Philip Cohen, and Reza Haf. 2024. Improving Cross-Domain Low-Resource Text Generation through LLM Post-Editing: A Programmer-Interpreter Approach. In Findings of the Association for Computational Linguistics: EACL 2024, pages 347–354, St. Julian's, Malta. Association for Computational Linguistics.
Cite (Informal): Improving Cross-Domain Low-Resource Text Generation through LLM Post-Editing: A Programmer-Interpreter Approach (Li et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-eacl.24.pdf