Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs
Monisha Jegadeesan | Sachin Kumar | John Wieting | Yulia Tsvetkov
Proceedings of the 1st Workshop on Multilingual Representation Learning, 2021
We present a novel technique for zero-shot paraphrase generation. The key contribution is an end-to-end multilingual paraphrasing model that is trained on translated parallel corpora to generate paraphrases into "meaning spaces", replacing the final softmax layer with word embeddings. This architectural modification, together with a training procedure that incorporates an autoencoding objective, enables effective parameter sharing across languages, more fluent monolingual rewriting, and greater diversity in the generated outputs. Our continuous-output paraphrase generation models outperform zero-shot paraphrasing baselines when evaluated on two languages, both under a battery of computational metrics and in human assessment.
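The architectural change described in the abstract swaps the decoder's softmax output layer for a projection into a pretrained word-embedding space, with discrete tokens recovered by nearest-neighbour search over the embedding table. The sketch below illustrates one plausible way to implement such a continuous-output head in PyTorch; the class name, the cosine-distance training loss, and all dimensions are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousOutputHead(nn.Module):
    """Sketch of a continuous-output layer: instead of a softmax over the
    vocabulary, the decoder state is projected into a fixed word-embedding
    space and trained to land close to the target word's embedding."""

    def __init__(self, hidden_dim: int, pretrained_embeddings: torch.Tensor):
        super().__init__()
        vocab_size, emb_dim = pretrained_embeddings.shape
        self.proj = nn.Linear(hidden_dim, emb_dim)
        # The embedding table is frozen; it defines the target "meaning space".
        self.embeddings = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=True)

    def forward(self, decoder_hidden: torch.Tensor) -> torch.Tensor:
        # Returns one continuous vector per output position: (batch, seq_len, emb_dim).
        return self.proj(decoder_hidden)

    def loss(self, predicted: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
        # Assumed loss: cosine distance between the prediction and the gold embedding.
        gold = self.embeddings(target_ids)
        return (1.0 - F.cosine_similarity(predicted, gold, dim=-1)).mean()

    @torch.no_grad()
    def decode(self, predicted: torch.Tensor) -> torch.Tensor:
        # Nearest-neighbour lookup in the embedding table recovers discrete tokens.
        pred_norm = F.normalize(predicted, dim=-1)
        table_norm = F.normalize(self.embeddings.weight, dim=-1)
        return (pred_norm @ table_norm.T).argmax(dim=-1)

# Toy usage with made-up sizes: 1000-word vocabulary, 300-d embeddings, 512-d decoder states.
head = ContinuousOutputHead(hidden_dim=512, pretrained_embeddings=torch.randn(1000, 300))
hidden = torch.randn(2, 7, 512)            # decoder states for a batch of 2, length 7
targets = torch.randint(0, 1000, (2, 7))   # gold token ids
training_loss = head.loss(head(hidden), targets)
decoded_tokens = head.decode(head(hidden))

Because the loss is computed against embedding vectors rather than a normalized distribution over the vocabulary, the output layer's size is independent of the vocabulary, which is what makes sharing the decoder across languages straightforward in this kind of setup.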