%0 Conference Proceedings
%T Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words
%A Zhou, Kaitlyn
%A Ethayarajh, Kawin
%A Card, Dallas
%A Jurafsky, Dan
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F zhou-etal-2022-problems
%X Cosine similarity of contextual embeddings is used in many NLP tasks (e.g., QA, IR, MT) and metrics (e.g., BERTScore). Here, we uncover systematic ways in which word similarities estimated by cosine over BERT embeddings are understated and trace this effect to training data frequency. We find that relative to human judgements, cosine similarity underestimates the similarity of frequent words with other instances of the same word or other words across contexts, even after controlling for polysemy and other factors. We conjecture that this underestimation of similarity for high frequency words is due to differences in the representational geometry of high and low frequency words and provide a formal argument for the two-dimensional case.
%R 10.18653/v1/2022.acl-short.45
%U https://aclanthology.org/2022.acl-short.45
%U https://doi.org/10.18653/v1/2022.acl-short.45
%P 401-423