Improving Summarization with Human Edits

Zonghai Yao, Benjamin Schloss, Sai Selvaraj


Abstract
Recent work has shown the promise of learning-with-human-feedback paradigms for producing text that humans judge to be high quality. Existing work uses human feedback to train large language models (LLMs) for general-domain abstractive summarization and has obtained summary quality exceeding traditional likelihood training. In this paper, we focus on a less explored form of human feedback: Human Edits. We propose Sequence Alignment (un)Likelihood Training (SALT), a novel technique that uses both human-edited and model-generated data together in the training loop. In addition, we show how to simulate Human Edits by pairing ground-truth summaries from existing training data (Imitation Edits) with model-generated summaries obtained after training, reducing the need for expensive human-edit data. In our experiments, we extend human-feedback exploration from general-domain summarization to medical-domain summarization. Our results demonstrate the effectiveness of SALT in improving summary quality with Human and Imitation Edits. Through additional experiments, we show that SALT outperforms DPO, a conventional RLHF method designed for human preferences, when applied to human-edit data. We hope the evidence in our paper prompts researchers to explore, collect, and better use different human-feedback approaches at scale.
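The abstract describes SALT only at a high level: an alignment between the model-generated summary and its human-edited version decides which tokens receive standard likelihood training (tokens the editor kept or added) and which receive unlikelihood training (tokens the editor changed or deleted). The paper itself defines the exact alignment and weighting; purely as an illustration of the generic (un)likelihood idea, here is a minimal PyTorch-style sketch in which all names, shapes, and masks are hypothetical, and the two sets of logits are assumed to come from separate teacher-forced decoder passes over the edited and generated summaries.

```python
import torch
import torch.nn.functional as F

def likelihood_loss(logits, target_ids, keep_mask):
    """Maximize log p(token) for tokens the human edit kept or added.

    logits:     (T, V) decoder logits, teacher-forced on the edited summary
    target_ids: (T,)   token ids of the human-edited summary
    keep_mask:  (T,)   1.0 at positions covered by the likelihood objective
    """
    log_p = F.log_softmax(logits, dim=-1)
    tok_lp = log_p.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    return -(keep_mask * tok_lp).sum() / keep_mask.sum().clamp(min=1.0)

def unlikelihood_loss(logits, target_ids, change_mask):
    """Maximize log(1 - p(token)) for tokens the human edit changed or removed.

    logits:      (T, V) decoder logits, teacher-forced on the model's own summary
    target_ids:  (T,)   token ids of the model-generated summary
    change_mask: (T,)   1.0 at positions covered by the unlikelihood objective
    """
    log_p = F.log_softmax(logits, dim=-1)
    p = log_p.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1).exp()
    ul = torch.log((1.0 - p).clamp(min=1e-6))  # clamp guards against log(0)
    return -(change_mask * ul).sum() / change_mask.sum().clamp(min=1.0)

# Toy usage with random tensors; a real run would derive the masks from a
# token-level alignment (e.g., edit distance) between the two summaries.
T, V = 8, 100
edited_logits, generated_logits = torch.randn(T, V), torch.randn(T, V)
edited_ids = torch.randint(0, V, (T,))
generated_ids = torch.randint(0, V, (T,))
keep_mask = torch.tensor([1., 1., 0., 1., 0., 1., 1., 1.])
change_mask = 1.0 - keep_mask

loss = likelihood_loss(edited_logits, edited_ids, keep_mask) \
     + unlikelihood_loss(generated_logits, generated_ids, change_mask)
```

The same masking scheme also accommodates the paper's Imitation Edits: the ground-truth reference summary is treated as if it were a human edit of the model's output, so no additional annotation is required.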
Anthology ID:
2023.emnlp-main.158
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2604–2620
URL:
https://aclanthology.org/2023.emnlp-main.158
DOI:
10.18653/v1/2023.emnlp-main.158
Cite (ACL):
Zonghai Yao, Benjamin Schloss, and Sai Selvaraj. 2023. Improving Summarization with Human Edits. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2604–2620, Singapore. Association for Computational Linguistics.
Cite (Informal):
Improving Summarization with Human Edits (Yao et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.158.pdf
Video:
https://aclanthology.org/2023.emnlp-main.158.mp4