Learning Probabilistic Sentence Representations from Paraphrases

Mingda Chen, Kevin Gimpel


Abstract
Probabilistic word embeddings have proven effective at capturing notions of generality and entailment, but there has been very little analogous work for sentences. In this paper we define probabilistic models that produce distributions for sentences. Our best-performing model treats each word as a linear transformation operator applied to a multivariate Gaussian distribution. We train our models on paraphrases and demonstrate that they naturally capture sentence specificity. While our proposed model achieves the best performance overall, we also show that simpler architectures encode specificity in the norms of their sentence vectors. Qualitative analysis shows that our probabilistic model captures sentential entailment and offers ways to analyze the specificity and precision of individual words.
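To make the best-performing model concrete, below is a minimal sketch of the core idea: a sentence distribution starts as a standard multivariate Gaussian, and each word applies an affine transformation to it. The vocabulary, dimensionality, random parameters, and the trace-based specificity proxy are illustrative assumptions for this sketch, not the paper's trained parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimensionality (illustrative choice, not from the paper)

# Hypothetical per-word parameters: a linear map A_w and an offset b_w.
# In the paper these would be learned from paraphrase data.
vocab = ["a", "dog", "barked"]
params = {
    w: (np.eye(d) + 0.1 * rng.normal(size=(d, d)),  # A_w
        0.1 * rng.normal(size=d))                    # b_w
    for w in vocab
}

def encode(sentence):
    """Compose per-word affine transformations over an initial standard
    Gaussian N(0, I). For X ~ N(mu, Sigma), A X + b ~ N(A mu + b, A Sigma A^T)."""
    mu, sigma = np.zeros(d), np.eye(d)
    for w in sentence:
        A, b = params[w]
        mu = A @ mu + b          # transform of the mean
        sigma = A @ sigma @ A.T  # corresponding transform of the covariance
    return mu, sigma

mu, sigma = encode(["a", "dog", "barked"])

# One plausible specificity proxy: total variance of the resulting
# distribution; a tighter (lower-variance) Gaussian suggests a more
# specific sentence. This scoring rule is an assumption for illustration.
print("total variance:", np.trace(sigma))

# For simpler point-vector encoders, the abstract notes that the norm of
# the sentence vector itself tracks specificity:
print("mean-vector norm:", np.linalg.norm(mu))
```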
Anthology ID:
2020.repl4nlp-1.3
Volume:
Proceedings of the 5th Workshop on Representation Learning for NLP
Month:
July
Year:
2020
Address:
Online
Editors:
Spandana Gella, Johannes Welbl, Marek Rei, Fabio Petroni, Patrick Lewis, Emma Strubell, Minjoon Seo, Hannaneh Hajishirzi
Venue:
RepL4NLP
SIG:
SIGREP
Publisher:
Association for Computational Linguistics
Pages:
17–23
URL:
https://aclanthology.org/2020.repl4nlp-1.3
DOI:
10.18653/v1/2020.repl4nlp-1.3
Cite (ACL):
Mingda Chen and Kevin Gimpel. 2020. Learning Probabilistic Sentence Representations from Paraphrases. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 17–23, Online. Association for Computational Linguistics.
Cite (Informal):
Learning Probabilistic Sentence Representations from Paraphrases (Chen & Gimpel, RepL4NLP 2020)
PDF:
https://aclanthology.org/2020.repl4nlp-1.3.pdf
Video:
http://slideslive.com/38929769
Data:
SNLI