Comparing in context: Improving cosine similarity measures with a metric tensor

Isa M. Apallius de Vos, Ghislaine L. van den Boogerd, Mara D. Fennema, Adriana Correia


Abstract
Cosine similarity is a widely used measure of the relatedness of pre-trained word embeddings, trained on a language modeling objective. Datasets such as WordSim-353 and SimLex-999 rate how similar words are according to human annotators and are therefore often used to evaluate the performance of language models. Under this setup, any improvement on the word similarity task requires an improved word representation. In this paper, we instead propose the use of an extended cosine similarity measure to improve performance on that task, with gains in interpretability. We explore the hypothesis that this approach is particularly useful when the word pairs being compared share the same context, for which distinct contextualized similarity measures can be learned. We first use the dataset of Richie et al. (2020) to learn contextualized metrics and compare the results with the baseline values obtained using the standard cosine similarity measure, finding consistent improvements over the baseline. We also train a contextualized similarity measure for both SimLex-999 and WordSim-353, comparing the results with the corresponding baselines, and we use these datasets as independent test sets for the all-context similarity measure learned on the contextualized dataset, obtaining positive results for a number of tests.
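The extended measure described in the abstract can be read as a bilinear form: for embeddings x and y and a positive semi-definite metric tensor M, the similarity is x^T M y normalized by the M-norms of x and y, with M equal to the identity recovering the standard cosine similarity. The Python sketch below illustrates only this generalized form; the vectors and the diagonal metric are hypothetical placeholders, not the metrics learned in the paper.

import numpy as np

def metric_cosine(x, y, M):
    # Generalized cosine similarity under a metric tensor M (assumed
    # positive semi-definite); M = identity gives the standard measure.
    num = x @ M @ y
    den = np.sqrt(x @ M @ x) * np.sqrt(y @ M @ y)
    return num / den

# Hypothetical 4-dimensional embeddings, for illustration only.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)

# With the identity metric, the result matches ordinary cosine similarity.
baseline = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
assert np.isclose(metric_cosine(x, y, np.eye(4)), baseline)

# A placeholder diagonal metric reweights embedding dimensions,
# in the spirit of a metric learned per context.
M_diag = np.diag(rng.uniform(0.1, 2.0, size=4))
print(metric_cosine(x, y, M_diag))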
Anthology ID:
2021.icon-main.17
Volume:
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2021
Address:
National Institute of Technology Silchar, Silchar, India
Editors:
Sivaji Bandyopadhyay, Sobha Lalitha Devi, Pushpak Bhattacharyya
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
Pages:
128–138
URL:
https://aclanthology.org/2021.icon-main.17
Cite (ACL):
Isa M. Apallius de Vos, Ghislaine L. van den Boogerd, Mara D. Fennema, and Adriana Correia. 2021. Comparing in context: Improving cosine similarity measures with a metric tensor. In Proceedings of the 18th International Conference on Natural Language Processing (ICON), pages 128–138, National Institute of Technology Silchar, Silchar, India. NLP Association of India (NLPAI).
Cite (Informal):
Comparing in context: Improving cosine similarity measures with a metric tensor (Apallius de Vos et al., ICON 2021)
PDF:
https://aclanthology.org/2021.icon-main.17.pdf
Optional supplementary material:
 2021.icon-main.17.OptionalSupplementaryMaterial.zip