Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation

Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, Shujian Huang


Abstract
This study investigates how Large Language Models (LLMs) leverage source and reference data in the machine translation evaluation task, aiming to better understand the mechanisms behind their remarkable performance in this task. We design controlled experiments across various input modes and model types, and employ both coarse-grained and fine-grained prompts to discern the utility of source versus reference information. We find that reference information significantly enhances evaluation accuracy, while, surprisingly, source information is sometimes counterproductive, indicating LLMs' inability to fully leverage their cross-lingual capability when evaluating translations. Further analyses of fine-grained evaluation and fine-tuning experiments show similar results. These findings also suggest a potential research direction for fully exploiting the cross-lingual capability of LLMs to achieve better performance in machine translation evaluation tasks.
Anthology ID:
2024.findings-acl.211
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3546–3562
URL:
https://aclanthology.org/2024.findings-acl.211
DOI:
10.18653/v1/2024.findings-acl.211
Cite (ACL):
Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, and Shujian Huang. 2024. Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 3546–3562, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation (Huang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.211.pdf