Beyond Reference: Evaluating High Quality Translations Better than Human References

Keonwoong Noh, Seokjin Oh, Woohwan Jung


Abstract
In Machine Translation (MT) evaluation, the conventional approach is to compare a translated sentence against its human-created reference. MT metrics assign a candidate sentence an absolute score (e.g., from 0 to 1) based on its similarity to the reference, so existing metrics give the maximum score to the reference itself. However, this approach overlooks the possibility that a candidate sentence exceeds the reference in quality. Recent advances in Large Language Models (LLMs) have highlighted this issue, as LLM-generated sentences often surpass human-written ones. To address this problem, we introduce the Residual score Metric (ResuMe), which evaluates the relative quality of candidate and reference sentences. ResuMe assigns a positive score to candidate sentences that outperform their references and a negative score to those that fall short. By adding the residual score from ResuMe to the absolute score from an MT metric, a candidate sentence can receive a higher score than the reference itself would from the MT metric alone. Experimental results demonstrate that ResuMe improves the alignment between MT metrics and human judgments at both the segment level and the system level.
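A minimal sketch of the scoring scheme described in the abstract, assuming hypothetical `mt_metric` and `resume_score` functions; the actual ResuMe model and the underlying MT metric are described in the paper, and the stubs below are placeholders:

```python
# Hypothetical illustration of the abstract's scoring scheme: the final
# score is the absolute metric score plus a signed residual score, so a
# candidate can score above the reference's own metric score.

def mt_metric(candidate: str, reference: str) -> float:
    """Placeholder for an absolute reference-based MT metric in [0, 1]
    (e.g., a neural metric such as COMET or BLEURT in practice)."""
    return 1.0 if candidate == reference else 0.8  # dummy value

def resume_score(candidate: str, reference: str) -> float:
    """Placeholder for ResuMe: positive if the candidate outperforms
    the reference, negative if it falls short."""
    return 0.0  # dummy value; the paper's model predicts this residual

def combined_score(candidate: str, reference: str) -> float:
    # Adding the residual lets a strong candidate exceed the maximum
    # score that the reference itself would receive from the MT metric.
    return mt_metric(candidate, reference) + resume_score(candidate, reference)

if __name__ == "__main__":
    print(combined_score("An example translation.", "A reference translation."))
```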
Anthology ID:
2024.emnlp-main.294
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5111–5127
URL:
https://aclanthology.org/2024.emnlp-main.294
Cite (ACL):
Keonwoong Noh, Seokjin Oh, and Woohwan Jung. 2024. Beyond Reference: Evaluating High Quality Translations Better than Human References. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5111–5127, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Beyond Reference: Evaluating High Quality Translations Better than Human References (Noh et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.294.pdf