PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models

Andrianos Michail, Simon Clematide, Juri Opitz


Abstract
The task of determining whether two texts are paraphrases has long been a challenge in NLP. However, the prevailing notion of paraphrase is often quite simplistic, offering only a limited view of the vast spectrum of paraphrase phenomena. Indeed, we find that evaluating models on a single paraphrase dataset can leave uncertainty about their true semantic understanding. To alleviate this, we create PARAPHRASUS, a benchmark designed for the multi-dimensional assessment, benchmarking, and selection of paraphrase detection models. Under our fine-grained evaluation lens, paraphrase detection models exhibit trade-offs that cannot be captured by a single classification dataset. Furthermore, PARAPHRASUS allows prompt calibration for different use cases, tailoring LLMs to specific strictness levels. PARAPHRASUS includes 3 challenges spanning over 10 datasets, including 8 repurposed and 2 newly annotated; we release it along with a benchmarking library at https://github.com/impresso/paraphrasus.
Anthology ID:
2025.coling-main.585
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
8749–8762
URL:
https://aclanthology.org/2025.coling-main.585/
Cite (ACL):
Andrianos Michail, Simon Clematide, and Juri Opitz. 2025. PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 8749–8762, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models (Michail et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.585.pdf