Paraphrasing with Large Language Models

Sam Witteveen, Martin Andrews


Abstract
Recently, large language models such as GPT-2 have shown themselves to be extremely adept at text generation and have also been able to achieve high-quality results in many downstream NLP tasks such as text classification, sentiment analysis and question answering with the aid of fine-tuning. We present a useful technique for using a large language model to perform the task of paraphrasing on a variety of texts and subjects. Our approach is demonstrated to be capable of generating paraphrases not only at a sentence level but also for longer spans of text such as paragraphs without needing to break the text into smaller chunks.
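The abstract does not spell out the fine-tuning setup, so the following is an illustrative sketch only, not the authors' published recipe: one common way to use a large language model such as GPT-2 for paraphrasing is to fine-tune it on pairs formatted as original text, a separator, then the paraphrase, and at inference time prompt it with new text plus the separator and sample a continuation. The ">>>" separator, the base checkpoint name, and the decoding parameters below are assumptions made for this example.

```python
# Hedged sketch: paraphrase generation with a GPT-2 model via Hugging Face
# transformers. The separator, checkpoint, and sampling settings are
# illustrative assumptions, not the configuration reported in the paper.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

MODEL_NAME = "gpt2"   # in practice, a checkpoint fine-tuned on paraphrase pairs
SEPARATOR = " >>> "   # assumed delimiter between source text and its paraphrase

tokenizer = GPT2Tokenizer.from_pretrained(MODEL_NAME)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)

def paraphrase(text: str, max_new_tokens: int = 100) -> str:
    """Prompt the model with 'text >>> ' and sample a continuation as the paraphrase."""
    prompt = text + SEPARATOR
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,        # sample rather than decode greedily
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
    generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Keep only the text after the separator as the paraphrase candidate.
    return generated.split(SEPARATOR, 1)[-1].strip()

print(paraphrase("Large language models can rewrite text while preserving its meaning."))
```

Because decoding is stochastic, repeated calls yield different candidate paraphrases; in practice one would generate several and keep the best according to some similarity or fluency criterion.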
Anthology ID:
D19-5623
Volume:
Proceedings of the 3rd Workshop on Neural Generation and Translation
Month:
November
Year:
2019
Address:
Hong Kong
Editors:
Alexandra Birch, Andrew Finch, Hiroaki Hayashi, Ioannis Konstas, Thang Luong, Graham Neubig, Yusuke Oda, Katsuhito Sudoh
Venue:
NGT
Publisher:
Association for Computational Linguistics
Pages:
215–220
URL:
https://aclanthology.org/D19-5623
DOI:
10.18653/v1/D19-5623
Cite (ACL):
Sam Witteveen and Martin Andrews. 2019. Paraphrasing with Large Language Models. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 215–220, Hong Kong. Association for Computational Linguistics.
Cite (Informal):
Paraphrasing with Large Language Models (Witteveen & Andrews, NGT 2019)
PDF:
https://aclanthology.org/D19-5623.pdf
Attachment:
D19-5623.Attachment.pdf
Data
WebText