A Qualitative Evaluation Framework for Paraphrase Identification

Venelin Kovatchev, M. Antonia Marti, Maria Salamo, Javier Beltran


Abstract
In this paper, we present a new approach for the evaluation, error analysis, and interpretation of supervised and unsupervised Paraphrase Identification (PI) systems. Our evaluation framework makes use of a PI corpus annotated with linguistic phenomena to provide a better understanding and interpretation of the performance of various PI systems. Our approach allows for a qualitative evaluation and comparison of PI models using human-interpretable categories. It does not require modification of the training objective of the systems and does not place an additional burden on the developers. We replicate several popular supervised and unsupervised PI systems. Using our evaluation framework, we show that: 1) each system performs differently with respect to a set of linguistic phenomena and makes qualitatively different kinds of errors; 2) some linguistic phenomena are more challenging than others across all systems.
Anthology ID: R19-1067
Volume: Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Month: September
Year: 2019
Address: Varna, Bulgaria
Editors: Ruslan Mitkov, Galia Angelova
Venue: RANLP
Publisher: INCOMA Ltd.
Pages: 568–577
URL: https://aclanthology.org/R19-1067
DOI: 10.26615/978-954-452-056-4_067
Cite (ACL): Venelin Kovatchev, M. Antonia Marti, Maria Salamo, and Javier Beltran. 2019. A Qualitative Evaluation Framework for Paraphrase Identification. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 568–577, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal): A Qualitative Evaluation Framework for Paraphrase Identification (Kovatchev et al., RANLP 2019)
PDF: https://aclanthology.org/R19-1067.pdf