You Are What You Train: Effects of Data Composition on Training Context-aware Machine Translation Models

Pawel Maka, Yusuf Can Semerci, Jan Scholtes, Gerasimos Spanakis


Abstract
Achieving human-level translations requires leveraging context to ensure coherence and handle complex phenomena like pronoun disambiguation. The sparsity of contextually rich examples in standard training data has been hypothesized as the reason context is difficult to utilize. In this work, we systematically validate this claim in both single- and multilingual settings by constructing training datasets with controlled proportions of contextually relevant examples. We demonstrate a strong association between training data sparsity and model performance, confirming sparsity as a key bottleneck. Importantly, we reveal that improvements in one contextual phenomenon do not generalize to others. While we observe some cross-lingual transfer, it is not significantly higher between languages within the same sub-family. Finally, we propose and empirically evaluate two training strategies designed to leverage the available data. These strategies improve context utilization, resulting in accuracy gains of up to 6 and 8 percentage points on the ctxPro evaluation in single- and multilingual settings, respectively.
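The core experimental manipulation described in the abstract is building training sets with a controlled fraction of contextually relevant examples. A minimal sketch of one way to do such mixing is shown below; the function name, sampling-with-replacement scheme, and toy sentence pairs are illustrative assumptions, not the authors' actual implementation.

```python
import random

def mix_with_ratio(contextual, generic, ratio, size, seed=0):
    """Build a training set of `size` examples in which a fraction
    `ratio` comes from the contextually relevant pool and the rest
    from the generic pool (sampled with replacement for simplicity)."""
    rng = random.Random(seed)
    n_ctx = round(size * ratio)
    sample = [rng.choice(contextual) for _ in range(n_ctx)]
    sample += [rng.choice(generic) for _ in range(size - n_ctx)]
    rng.shuffle(sample)  # interleave the two sources
    return sample

# Toy pools: pairs whose translation needs context (e.g. pronoun
# disambiguation) vs. pairs that are unambiguous in isolation.
ctx_pool = [("Sie ist da.", "She is there.")]
gen_pool = [("Hallo.", "Hello.")]

train = mix_with_ratio(ctx_pool, gen_pool, ratio=0.25, size=100)
ctx_share = sum(ex in ctx_pool for ex in train) / len(train)
print(ctx_share)  # 0.25
```

Varying `ratio` across runs while holding `size` fixed is what lets sparsity be isolated as the experimental variable.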
Anthology ID:
2025.emnlp-main.1394
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
27402–27425
URL:
https://aclanthology.org/2025.emnlp-main.1394/
Cite (ACL):
Pawel Maka, Yusuf Can Semerci, Jan Scholtes, and Gerasimos Spanakis. 2025. You Are What You Train: Effects of Data Composition on Training Context-aware Machine Translation Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 27402–27425, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
You Are What You Train: Effects of Data Composition on Training Context-aware Machine Translation Models (Maka et al., EMNLP 2025)
PDF:
https://aclanthology.org/2025.emnlp-main.1394.pdf
Checklist:
 2025.emnlp-main.1394.checklist.pdf