%0 Conference Proceedings
%T Understanding the Source of Semantic Regularities in Word Embeddings
%A Chiang, Hsiao-Yu
%A Camacho-Collados, Jose
%A Pardos, Zachary
%Y Fernández, Raquel
%Y Linzen, Tal
%S Proceedings of the 24th Conference on Computational Natural Language Learning
%D 2020
%8 November
%I Association for Computational Linguistics
%C Online
%F chiang-etal-2020-understanding
%X Semantic relations are core to how humans understand and express concepts in the real world using language. Recently, there has been a thread of research aimed at modeling these relations by learning vector representations from text corpora. Most of these approaches focus strictly on leveraging the co-occurrences of relationship word pairs within sentences. In this paper, we investigate the hypothesis that examples of a lexical relation in a corpus are fundamental to a neural word embedding’s ability to complete analogies involving the relation. Our experiments, in which we remove all known examples of a relation from training corpora, show only marginal degradation in analogy completion performance involving the removed relation. This finding enhances our understanding of neural word embeddings, showing that co-occurrence information of a particular semantic relation is not the main source of their structural regularity.
%R 10.18653/v1/2020.conll-1.9
%U https://aclanthology.org/2020.conll-1.9
%U https://doi.org/10.18653/v1/2020.conll-1.9
%P 119-131