Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs

Monisha Jegadeesan, Sachin Kumar, John Wieting, Yulia Tsvetkov


Abstract
We present a novel technique for zero-shot paraphrase generation. The key contribution is an end-to-end multilingual paraphrasing model that is trained on translated parallel corpora to generate paraphrases in a continuous "meaning space": the final softmax layer is replaced with word embeddings. This architectural modification, together with a training procedure that incorporates an autoencoding objective, enables effective parameter sharing across languages, yielding more fluent monolingual rewriting and greater diversity in the generated outputs. Our continuous-output paraphrase generation models outperform zero-shot paraphrasing baselines on two languages, as measured by a battery of computational metrics as well as human assessment.
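For illustration, below is a minimal PyTorch sketch of an embedding-output head of the kind the abstract describes. This is a hypothetical reconstruction, not the authors' code: the class name, the cosine-distance training loss, and the nearest-neighbor decoding step are assumptions made for the example; the paper's actual loss and decoding details are given in the PDF and the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingOutputHead(nn.Module):
    """Decoder output head that predicts word embeddings directly,
    replacing the usual softmax layer over the vocabulary.
    (Illustrative sketch; names and loss choice are assumptions.)"""

    def __init__(self, hidden_dim: int, pretrained_emb: torch.Tensor):
        super().__init__()
        # Frozen, unit-normalized target-side embedding table (V x d).
        self.register_buffer("emb", F.normalize(pretrained_emb, dim=-1))
        self.proj = nn.Linear(hidden_dim, pretrained_emb.size(1))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Map decoder states (B x T x H) into the embedding space (B x T x d).
        return F.normalize(self.proj(hidden), dim=-1)

    def loss(self, hidden: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
        # Cosine-distance loss against the gold target-word embeddings,
        # instead of cross-entropy over a softmax distribution.
        pred = self.forward(hidden)
        gold = self.emb[target_ids]
        return (1.0 - (pred * gold).sum(dim=-1)).mean()

    @torch.no_grad()
    def decode(self, hidden: torch.Tensor) -> torch.Tensor:
        # Nearest-neighbor lookup in embedding space replaces argmax
        # over vocabulary logits at inference time.
        return (self.forward(hidden) @ self.emb.t()).argmax(dim=-1)


# Toy usage with random embeddings, just to show the shapes involved:
head = EmbeddingOutputHead(hidden_dim=512, pretrained_emb=torch.randn(30000, 300))
states = torch.randn(2, 10, 512)                        # decoder hidden states
loss = head.loss(states, torch.randint(0, 30000, (2, 10)))
token_ids = head.decode(states)                         # (2, 10) predicted ids
```

Because the output layer no longer depends on a language-specific vocabulary projection, a head like this can in principle be shared across languages, which is consistent with the parameter-sharing benefit the abstract claims.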
Anthology ID:
2021.mrl-1.15
Volume:
Proceedings of the 1st Workshop on Multilingual Representation Learning
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Duygu Ataman, Alexandra Birch, Alexis Conneau, Orhan Firat, Sebastian Ruder, Gozde Gul Sahin
Venue:
MRL
Publisher:
Association for Computational Linguistics
Pages:
166–175
URL:
https://aclanthology.org/2021.mrl-1.15
DOI:
10.18653/v1/2021.mrl-1.15
Cite (ACL):
Monisha Jegadeesan, Sachin Kumar, John Wieting, and Yulia Tsvetkov. 2021. Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 166–175, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs (Jegadeesan et al., MRL 2021)
PDF:
https://aclanthology.org/2021.mrl-1.15.pdf
Code:
monisha-jega/paraphrasing_embedding_outputs