Nakul Verma


2023

Contrastive Loss is All You Need to Recover Analogies as Parallel Lines
Narutatsu Ri | Fei-Tzin Lee | Nakul Verma
Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)

While static word embedding models are known to represent linguistic analogies as parallel lines in high-dimensional space, the mechanism by which they give rise to these geometric structures remains obscure. We find that an elementary contrastive-style method applied to distributional information performs competitively with popular word embedding models on analogy recovery tasks, while achieving dramatic speedups in training time. Further, we demonstrate that a contrastive loss is sufficient to create these parallel structures in word embeddings, and we establish a precise relationship between the co-occurrence statistics and the geometric structure of the resulting word embeddings.
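The abstract does not spell out the objective, so the sketch below is only a rough illustration of what a contrastive-style loss over co-occurrence data can look like: a generic SGNS-like logistic contrastive objective trained on (word, context) pairs. The randomly generated toy pairs, the hyperparameters, and the loss form are all assumptions for demonstration, not the paper's actual method.

```python
# Illustrative sketch only, NOT the method from the paper: a generic
# SGNS-style contrastive objective over (center, context) pairs.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for real co-occurrence statistics: random (center, context) pairs.
vocab_size, dim = 100, 16
pairs = torch.randint(0, vocab_size, (512, 2))

emb = torch.nn.Embedding(vocab_size, dim)
opt = torch.optim.SGD(emb.parameters(), lr=0.5)

for step in range(200):
    centers, contexts = pairs[:, 0], pairs[:, 1]
    negatives = torch.randint(0, vocab_size, contexts.shape)  # noise contexts

    c = emb(centers)
    pos_score = (c * emb(contexts)).sum(-1)   # similarity to observed context
    neg_score = (c * emb(negatives)).sum(-1)  # similarity to random context

    # Contrastive (logistic) loss: pull co-occurring pairs together,
    # push randomly sampled pairs apart.
    loss = -(F.logsigmoid(pos_score) + F.logsigmoid(-neg_score)).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key ingredient is the contrast itself: observed co-occurrences are scored against noise pairs, and it is this kind of objective that the abstract credits with being sufficient to produce the parallel-line analogy structure in the learned embeddings.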