Kuang-Da Wang
2025
Extending Automatic Machine Translation Evaluation to Book-Length Documents
Kuang-Da Wang | Shuoyang Ding | Chao-Han Huck Yang | Ping-Chun Hsieh | Wen-Chih Peng | Vitaly Lavrukhin | Boris Ginsburg
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Despite Large Language Models (LLMs) demonstrating superior translation performance and long-context capabilities, evaluation methodologies remain constrained to sentence-level assessment due to dataset limitations, token number restrictions in metrics, and rigid sentence boundary requirements. We introduce SEGALE, an evaluation scheme that extends existing automatic metrics to long-document translation by treating documents as continuous text and applying sentence segmentation and alignment methods. Our approach enables previously unattainable document-level evaluation, handling translations of arbitrary length generated with document-level prompts while accounting for under-/over-translations and varied sentence boundaries. Experiments show our scheme significantly outperforms existing long-form document evaluation schemes, while being comparable to evaluations performed with ground-truth sentence alignments. Additionally, we apply our scheme to book-length texts and newly demonstrate that many open-weight LLMs fail to effectively translate documents at their reported maximum context lengths.
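The abstract describes a segment-then-align-then-score pipeline for document-level evaluation. The sketch below is only an illustration of that general idea, not the paper's implementation: the segmenter, the alignment procedure (here a Gale-Church-style length-based monotone alignment), the skip penalty, and all function names are assumptions; SEGALE's actual components may differ.

```python
# Illustrative sketch of a segment-then-align evaluation pipeline in the spirit
# of SEGALE. All names and the alignment heuristic are hypothetical.
import re

def split_sentences(text):
    """Naive sentence segmentation on terminal punctuation (a stand-in for a
    proper sentence segmenter)."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s.strip()]

def monotone_align(hyp_sents, ref_sents):
    """Monotonic 1-1 / 1-0 / 0-1 alignment by character-length similarity.
    Null alignments ("" on either side) expose over- and under-translations
    to the downstream sentence-level metric."""
    INF = float("inf")
    n, m = len(hyp_sents), len(ref_sents)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    skip_penalty = 50.0  # cost of leaving a sentence unaligned (arbitrary choice)
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:  # 1-1 match
                c = cost[i][j] + abs(len(hyp_sents[i]) - len(ref_sents[j]))
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j, "match")
            if i < n:            # hypothesis sentence with no reference (over-translation)
                c = cost[i][j] + skip_penalty
                if c < cost[i + 1][j]:
                    cost[i + 1][j], back[i + 1][j] = c, (i, j, "hyp_only")
            if j < m:            # reference sentence with no hypothesis (under-translation)
                c = cost[i][j] + skip_penalty
                if c < cost[i][j + 1]:
                    cost[i][j + 1], back[i][j + 1] = c, (i, j, "ref_only")
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj, kind = back[i][j]
        if kind == "match":
            pairs.append((hyp_sents[pi], ref_sents[pj]))
        elif kind == "hyp_only":
            pairs.append((hyp_sents[pi], ""))
        else:
            pairs.append(("", ref_sents[pj]))
        i, j = pi, pj
    return list(reversed(pairs))

def score_document(hyp_doc, ref_doc, sentence_metric):
    """Average any sentence-level metric over the aligned pairs."""
    pairs = monotone_align(split_sentences(hyp_doc), split_sentences(ref_doc))
    scores = [sentence_metric(h, r) for h, r in pairs]
    return sum(scores) / len(scores) if scores else 0.0
```

In practice the `sentence_metric` slot would be filled by a regression-based metric; the point of the sketch is only that document-level scoring reduces to sentence-level scoring once segmentation and alignment are in place.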
Nvidia-Nemo’s WMT 2025 Metrics Shared Task Submission
Brian Yan | Shuoyang Ding | Kuang-Da Wang | Siqi Ouyang | Oleksii Hrinchuk | Vitaly Lavrukhin | Boris Ginsburg
Proceedings of the Tenth Conference on Machine Translation
This paper describes Nvidia-Nemo’s WMT 2025 Metrics Shared Task submission. We investigated two strategies for extending Machine Translation (MT) evaluation to unsegmented documents: 1) first segmenting into sentences and then applying regression-based metrics and 2) directly utilizing the long-context capabilities of LLMs. The base comparison of the segmentation-based and LLM-based metrics on the WMT 2023-24 evaluation sets indicated that the former performs more robustly across language pairs. Thus we sought to improve the LLM-based approach by incorporating relative evaluation: this setting jointly evaluates all candidate translations at once and relative to each other, rather than evaluating each separately. Our experiments using the open-source Qwen3 LLM show that relative evaluation improves score correlations with human judgment, but only if the task is structured as a 2-stage evaluate-then-refine problem.
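The abstract's 2-stage evaluate-then-refine setup can be pictured as follows. This is a hypothetical sketch, not the submission's actual prompts or scoring protocol: the `llm` argument is any callable mapping a prompt string to a response string, and the prompt wording and score parsing are invented for illustration.

```python
# Hypothetical sketch of 2-stage relative LLM evaluation: score each candidate
# independently, then refine all scores jointly. Prompts are illustrative only.
def evaluate_then_refine(source, candidates, llm):
    # Stage 1: score each candidate translation independently.
    initial = []
    for cand in candidates:
        prompt = (
            "Rate the translation quality from 0 to 100.\n"
            f"Source: {source}\nTranslation: {cand}\nScore:"
        )
        initial.append(float(llm(prompt).strip()))

    # Stage 2: present all candidates with their preliminary scores and ask the
    # model to adjust them relative to each other.
    listing = "\n".join(
        f"[{i}] score={s:.1f} translation={c}"
        for i, (c, s) in enumerate(zip(candidates, initial))
    )
    refine_prompt = (
        "Below are candidate translations of the same source with preliminary "
        "scores. Adjust the scores so they are consistent relative to each "
        "other, and return one score per line in the same order.\n"
        f"Source: {source}\n{listing}\nRefined scores:"
    )
    lines = llm(refine_prompt).strip().splitlines()
    return [float(x) for x in lines[: len(candidates)]]
```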