LETI: Learning to Generate from Textual Interactions

Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji


Abstract
Fine-tuning pre-trained language models (LMs) is essential for enhancing their capabilities. Existing techniques commonly fine-tune on input-output pairs (e.g., instruction tuning) or with numerical rewards that gauge the output quality (e.g., RLHF). We explore LMs’ potential to **le**arn from **t**extual **i**nteractions (**LETI**) that not only check their correctness with *binary labels* but also pinpoint and explain errors in their outputs through *textual feedback*. Our focus is the code generation task, where the model produces code based on natural language instructions. This setting invites a natural and scalable way to acquire textual feedback: the error messages and stack traces from executing the code with a Python interpreter. LETI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback. A binary reward token is prepended to this fine-tuning text to differentiate correct and buggy solutions. LETI requires *no* ground-truth outputs for training and even outperforms a fine-tuned baseline that does. LETI not only improves the performance of LMs on the code generation dataset MBPP, but also generalizes to other datasets: trained on MBPP, it achieves comparable or better performance than the base LMs on unseen problems in HumanEval. Furthermore, compared to binary feedback, we observe that textual feedback leads to improved generation quality and sample efficiency, achieving the same performance with fewer than half of the gradient steps. LETI is equally applicable to natural language tasks that can be formulated as code generation, which we empirically verify on event argument extraction.
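The data-construction step the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the reward token strings (`<|good|>`, `<|bad|>`) and function names are hypothetical, and a real pipeline would execute generated code in a sandboxed subprocess rather than in-process `exec`.

```python
import traceback

def execute_and_capture(program: str) -> tuple[bool, str]:
    """Run an LM-generated program and collect textual feedback.

    Returns (passed, feedback): feedback is the error message plus
    stack trace when execution fails, or an empty string on success.
    """
    try:
        exec(compile(program, "<generated>", "exec"), {})
        return True, ""
    except Exception:
        return False, traceback.format_exc()

# Hypothetical binary reward tokens prepended to the fine-tuning text
# to distinguish correct from buggy solutions.
GOOD_TOKEN, BAD_TOKEN = "<|good|>", "<|bad|>"

def build_training_text(instruction: str, program: str) -> str:
    """Concatenate reward token, instruction, generated program, and
    textual feedback into one sequence for standard LM fine-tuning."""
    passed, feedback = execute_and_capture(program)
    parts = [GOOD_TOKEN if passed else BAD_TOKEN, instruction, program]
    if feedback:
        parts.append(feedback)
    return "\n".join(parts)

# A buggy solution: `c` is undefined, so the assertion raises NameError.
buggy = "def add(a, b):\n    return a + c\nassert add(1, 2) == 3"
text = build_training_text("Write add(a, b) returning the sum.", buggy)
```

In this sketch, `text` begins with the bad-reward token and ends with the interpreter's stack trace, so the fine-tuned model sees both the binary label and the textual explanation of the error.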
Anthology ID:
2024.findings-naacl.16
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
223–239
URL:
https://aclanthology.org/2024.findings-naacl.16
Cite (ACL):
Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, and Heng Ji. 2024. LETI: Learning to Generate from Textual Interactions. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 223–239, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
LETI: Learning to Generate from Textual Interactions (Wang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-naacl.16.pdf
Copyright:
2024.findings-naacl.16.copyright.pdf