Reducing Sequence Length by Predicting Edit Spans with Large Language Models

Masahiro Kaneko, Naoaki Okazaki


Abstract
Large Language Models (LLMs) have demonstrated remarkable performance on various tasks and gained significant attention. LLMs are also used for local sequence transduction tasks, including grammatical error correction (GEC) and formality style transfer, where most tokens in a source text are kept unchanged. However, models that generate all target tokens in such tasks have a tendency to simply copy the input text as is, without making the needed changes, because the difference between the input and output texts is minimal in the training data. Generating the full target is also inefficient because the computational cost of the Transformer grows quadratically with the target sequence length. This paper proposes predicting edit spans of the source text for local sequence transduction tasks. By representing an edit span with a position in the source text and the corrected tokens, we can reduce the length of the target sequence and the computational cost of inference. We apply instruction tuning to LLMs on supervision data of edit spans. Experiments show that the proposed method achieves performance comparable to the baseline on four tasks: paraphrasing, formality style transfer, GEC, and text simplification, despite reducing the target text to as little as 21% of its original length. Furthermore, we report that task-specific fine-tuning with the proposed method achieves state-of-the-art performance on the four tasks.
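The edit-span idea described in the abstract can be illustrated with a minimal sketch. The exact span format and alignment procedure used in the paper may differ; this example simply uses Python's `difflib` to derive (start, end, replacement) triples over source positions, so that only the changed spans, rather than the full target, need to be generated:

```python
# Sketch (not the authors' exact format): extract edit spans
# (start index, end index, replacement tokens) from a source/target
# token pair, so a model only emits the edited spans.
import difflib

def edit_spans(source_tokens, target_tokens):
    """Return (start, end, replacement) triples over source positions."""
    matcher = difflib.SequenceMatcher(a=source_tokens, b=target_tokens)
    spans = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # keep only insertions, deletions, substitutions
            spans.append((i1, i2, target_tokens[j1:j2]))
    return spans

src = "She likes play tennis .".split()
tgt = "She likes playing tennis .".split()
print(edit_spans(src, tgt))  # [(2, 3, ['playing'])]
```

For a mostly unchanged sentence, the span list is far shorter than the full corrected sentence, which is the source of the sequence-length reduction the abstract reports.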
Anthology ID:
2023.emnlp-main.619
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10017–10029
URL:
https://aclanthology.org/2023.emnlp-main.619
DOI:
10.18653/v1/2023.emnlp-main.619
Cite (ACL):
Masahiro Kaneko and Naoaki Okazaki. 2023. Reducing Sequence Length by Predicting Edit Spans with Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10017–10029, Singapore. Association for Computational Linguistics.
Cite (Informal):
Reducing Sequence Length by Predicting Edit Spans with Large Language Models (Kaneko & Okazaki, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.619.pdf
Video:
https://aclanthology.org/2023.emnlp-main.619.mp4