Pointwise Paraphrase Appraisal is Potentially Problematic

Hannah Chen, Yangfeng Ji, David Evans


Abstract
The prevailing approach to training and evaluating paraphrase identification models frames the task as binary classification: given a pair of sentences, the model is judged by how accurately it labels pairs as paraphrases or non-paraphrases. This pointwise evaluation does not match the objectives of most real-world applications well, so the goal of our work is to understand how models that perform well under pointwise evaluation may fail in practice, and to find better methods for evaluating paraphrase identification models. As a first step toward that goal, we show that although the standard way of fine-tuning BERT for paraphrase identification, pairing the two sentences as one input sequence, yields a model with state-of-the-art performance, that model may perform poorly on simple tasks such as identifying pairs of identical sentences. Moreover, we show that these models may even assign a pair of randomly selected sentences a higher paraphrase score than a pair of identical ones.
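The failure mode described in the abstract suggests a simple diagnostic: for a pointwise paraphrase scorer, count how often a randomly selected sentence pair outscores an identical pair. The sketch below is illustrative only (it is not the authors' code); `score` stands in for any model that maps a sentence pair to a paraphrase score, and the token-overlap scorer is a toy placeholder for a trained model such as fine-tuned BERT.

```python
# Illustrative sanity check for a pointwise paraphrase scorer (sketch,
# not the authors' implementation). A sound scorer should never rank a
# random pair above an identical pair.
import random


def sanity_check(score, sentences, trials=100, seed=0):
    """Count how often a random pair outscores an identical pair."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        s = rng.choice(sentences)
        t = rng.choice(sentences)
        # Failure case from the paper: score(random pair) > score(s, s).
        if score(s, t) > score(s, s):
            failures += 1
    return failures


def overlap_score(a, b):
    """Toy Jaccard token-overlap scorer; a trained model would replace this."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)


sents = ["the cat sat", "a dog barked", "the cat sat down"]
print(sanity_check(overlap_score, sents))  # → 0: overlap can never beat identity
```

By construction the toy scorer reports zero failures, since an identical pair always scores 1.0; the paper's point is that a fine-tuned pointwise model plugged in as `score` may not pass this check.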
Anthology ID:
2020.acl-srw.20
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
150–155
URL:
https://aclanthology.org/2020.acl-srw.20
DOI:
10.18653/v1/2020.acl-srw.20
Cite (ACL):
Hannah Chen, Yangfeng Ji, and David Evans. 2020. Pointwise Paraphrase Appraisal is Potentially Problematic. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 150–155, Online. Association for Computational Linguistics.
Cite (Informal):
Pointwise Paraphrase Appraisal is Potentially Problematic (Chen et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-srw.20.pdf
Video:
http://slideslive.com/38928658
Data
MRPC