%0 Conference Proceedings
%T Just Rank: Rethinking Evaluation with Word and Sentence Similarities
%A Wang, Bin
%A Kuo, C.-C. Jay
%A Li, Haizhou
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F wang-etal-2022-just
%X Word and sentence embeddings are useful feature representations in natural language processing. However, intrinsic evaluation for embeddings lags far behind, with no significant update in the past decade. Word and sentence similarity tasks have become the de facto evaluation method, which leads models to overfit to such evaluations and negatively impacts the development of embedding models. This paper first points out the problems with using semantic similarity as the gold standard for word and sentence embedding evaluations. We then propose a new intrinsic evaluation method called EvalRank, which shows a much stronger correlation with downstream tasks. Extensive experiments are conducted on 60+ models and popular datasets to support our judgments. Finally, a practical evaluation toolkit is released for future benchmarking purposes.
%R 10.18653/v1/2022.acl-long.419
%U https://aclanthology.org/2022.acl-long.419
%U https://doi.org/10.18653/v1/2022.acl-long.419
%P 6060-6077