Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model

Richard Futrell


Abstract
We investigate how to use pretrained static word embeddings to deliver improved estimates of bilexical co-occurrence probabilities: conditional probabilities of one word given a single other word in a specific relationship. Such probabilities play important roles in psycholinguistics, corpus linguistics, and usage-based cognitive modeling of language more generally. We propose a log-bilinear model taking pretrained vector representations of the two words as input, enabling generalization based on the distributional information contained in both vectors. We show that this model outperforms baselines in estimating probabilities of adjectives given nouns that they attributively modify, and probabilities of nominal direct objects given their head verbs, given limited training data in Arabic, English, Korean, and Spanish.
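The abstract does not spell out the model's exact parameterization, but a minimal sketch of one standard log-bilinear form may help fix ideas: score p(target | context) as proportional to exp(x_c^T A y_t + b_t), where x_c and y_t are frozen pretrained embeddings. The PyTorch sketch below is illustrative only; the class name, the bilinear matrix A, and the bias b are assumptions, not the paper's specification.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LogBilinearCooc(nn.Module):
    # Sketch of a log-bilinear co-occurrence model:
    # p(target | context) is proportional to exp(x_c^T A y_t + b_t),
    # with frozen pretrained embeddings x_c, y_t. Only A and b are trained.
    def __init__(self, context_embs, target_embs):
        super().__init__()
        # Pretrained embeddings are fixed buffers, not updated during training.
        self.register_buffer("X", context_embs)   # (V_context, d_c)
        self.register_buffer("Y", target_embs)    # (V_target, d_t)
        d_c, d_t = context_embs.size(1), target_embs.size(1)
        self.A = nn.Parameter(0.01 * torch.randn(d_c, d_t))
        self.b = nn.Parameter(torch.zeros(target_embs.size(0)))

    def forward(self, context_ids):
        # context_ids: (batch,) indices of the conditioning words.
        x = self.X[context_ids]                   # (batch, d_c)
        logits = x @ self.A @ self.Y.t() + self.b # (batch, V_target)
        return F.log_softmax(logits, dim=-1)      # log p(target | context)

Training such a model would minimize the negative log-likelihood of observed (context, target) pairs, e.g. (head verb, direct object): loss = F.nll_loss(model(verb_ids), object_ids). Because both embeddings enter the score, the model can generalize to unseen pairs from the distributional information in each vector.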
Anthology ID: 2022.cmcl-1.6
Volume: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Emmanuele Chersoni, Nora Hollenstein, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
Venue: CMCL
Publisher: Association for Computational Linguistics
Pages: 54–60
URL: https://aclanthology.org/2022.cmcl-1.6
DOI: 10.18653/v1/2022.cmcl-1.6
Cite (ACL): Richard Futrell. 2022. Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 54–60, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): Estimating word co-occurrence probabilities from pretrained static embeddings using a log-bilinear model (Futrell, CMCL 2022)
PDF: https://aclanthology.org/2022.cmcl-1.6.pdf
Video: https://aclanthology.org/2022.cmcl-1.6.mp4