Dirichlet-Smoothed Word Embeddings for Low-Resource Settings

Jakob Jungmaier, Nora Kassner, Benjamin Roth


Abstract
Classical count-based word embeddings built from positive pointwise mutual information (PPMI) weighted co-occurrence matrices have been widely superseded by machine-learning-based methods like word2vec and GloVe. These methods, however, are usually trained on very large amounts of text, and in many cases, for example for specific domains or low-resource languages, little text data is available. This paper revisits PPMI and adds Dirichlet smoothing to correct its bias towards rare words. We evaluate on standard word similarity data sets and compare against word2vec and the recent state of the art for low-resource settings: Positive and Unlabeled (PU) Learning for word embeddings. The proposed method outperforms PU-Learning in low-resource settings and obtains competitive results for Maltese and Luxembourgish.
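Since the method's core is a single smoothing step applied before the PPMI computation, a compact sketch may help make it concrete. The snippet below is not the authors' implementation: it assumes add-alpha (Dirichlet) smoothing of the word-context co-occurrence counts, followed by PPMI weighting and a truncated SVD to obtain dense vectors. The smoothing constant alpha, the embedding dimension, and the toy count matrix are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of Dirichlet-smoothed PPMI embeddings.
# Illustrative only: alpha, dim, and the toy counts are assumptions.
import numpy as np

def smoothed_ppmi(counts: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """PPMI of a word-context count matrix with add-alpha (Dirichlet) smoothing."""
    c = counts + alpha                       # pseudo-count on every cell
    p_wc = c / c.sum()                       # joint probabilities
    p_w = p_wc.sum(axis=1, keepdims=True)    # word marginals
    p_c = p_wc.sum(axis=0, keepdims=True)    # context marginals
    pmi = np.log(p_wc / (p_w * p_c))         # smoothing keeps all cells positive
    return np.maximum(pmi, 0.0)              # positive PMI

def embed(counts: np.ndarray, dim: int = 2, alpha: float = 0.1) -> np.ndarray:
    """Dense embeddings via truncated SVD of the smoothed PPMI matrix."""
    u, s, _ = np.linalg.svd(smoothed_ppmi(counts, alpha), full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])     # symmetric sqrt weighting of singular values

# Toy 4-word x 4-context co-occurrence matrix.
counts = np.array([[10, 2, 0, 1],
                   [ 3, 8, 1, 0],
                   [ 0, 1, 5, 4],
                   [ 1, 0, 4, 6]], dtype=float)
print(embed(counts))
```

With alpha > 0 every cell of the matrix is non-zero, so rare words no longer receive the inflated PMI scores that pure counts produce; this is the bias correction the abstract refers to.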
Anthology ID:
2020.lrec-1.437
Volume:
Proceedings of the Twelfth Language Resources and Evaluation Conference
Month:
May
Year:
2020
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Note:
Pages:
3560–3565
Language:
English
URL:
https://aclanthology.org/2020.lrec-1.437
Cite (ACL):
Jakob Jungmaier, Nora Kassner, and Benjamin Roth. 2020. Dirichlet-Smoothed Word Embeddings for Low-Resource Settings. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 3560–3565, Marseille, France. European Language Resources Association.
Cite (Informal):
Dirichlet-Smoothed Word Embeddings for Low-Resource Settings (Jungmaier et al., LREC 2020)
PDF:
https://aclanthology.org/2020.lrec-1.437.pdf