Quentin Lemesle


2025

Evaluating automatic paraphrase production systems is a difficult task, as it involves, among other things, assessing the semantic proximity between two sentences. Usual measures are based on lexical distances or, at best, on alignments of semantic embeddings. The rise of Large Language Models (LLMs) has provided tools to model relationships within a text thanks to the attention mechanism. In this article, we introduce ParaPLUIE, a new measure based on a log-likelihood ratio from an LLM, to assess the quality of a potential paraphrase. This measure is compared with usual measures on two datasets that were well known to the NLP community prior to this study. Three new small datasets have been built to allow the metrics to be compared in different scenarios and to avoid data contamination bias. According to our evaluations, the proposed measure is better at sorting pairs of sentences by semantic proximity. In particular, it is much more independent of lexical distance and provides an interpretable classification threshold between paraphrases and non-paraphrases.
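The abstract describes ParaPLUIE as a log-likelihood ratio obtained from an LLM, without giving its exact formulation. The following is a minimal sketch of one way such a ratio could be computed, assuming a yes/no prompting scheme; the prompt, the toy probability table, and the function names are illustrative assumptions, not the paper's actual method. In a real system, the stand-in function would query an LLM and read the next-token probabilities of "yes" and "no" from its softmaxed logits.

```python
import math

def llm_next_token_probs(prompt):
    """Stand-in for an LLM's next-token distribution (toy values).

    A real implementation would run a language model on the prompt and
    softmax its logits; this hypothetical table only serves the sketch.
    """
    if "cat sat" in prompt and "cat was sitting" in prompt:
        return {"yes": 0.8, "no": 0.2}   # plausible paraphrase pair
    return {"yes": 0.1, "no": 0.9}       # semantically distant pair

def paraphrase_llr(sent_a, sent_b):
    """Log-likelihood ratio log P(yes) - log P(no) for a yes/no prompt."""
    prompt = f'Is "{sent_b}" a paraphrase of "{sent_a}"? Answer yes or no: '
    probs = llm_next_token_probs(prompt)
    return math.log(probs["yes"]) - math.log(probs["no"])

score = paraphrase_llr("The cat sat on the mat.",
                       "The cat was sitting on the mat.")
```

Under this framing, a positive score favours the paraphrase hypothesis and a negative one the opposite, so zero is a natural, interpretable classification threshold of the kind the abstract mentions.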

2024

Evaluating automatic paraphrase production systems is a difficult task, as it involves, among other things, assessing the semantic proximity between two sentences. Traditional measures rely on lexical distances or, at best, on alignments of semantic embeddings. In this article, we study some of these measures on paraphrase and non-paraphrase corpora recognized for their quality or difficulty on this task. We propose a new measure, ParaPLUIE, based on the use of a large language model. According to our experiments, it is better able to sort sentence pairs by semantic proximity.