Optimising LLM-Driven Machine Translation with Context-Aware Sliding Windows

Xinye Yang, Yida Mu, Kalina Bontcheva, Xingyi Song


Abstract
This paper describes SheffieldGATE's submission to the WMT 2024 Chat Shared Translation Task. We participate in three language pairs: English-German, English-Dutch, and English-Portuguese (Brazil). In this work, we introduce a context-aware sliding window decoding method to track dependencies between chat messages. We fine-tune a large pre-trained language model on the training data provided by the shared task. Our experiments (i) compare model performance between multilingual and bilingual fine-tuning and (ii) assess the impact of different window sizes. Our experimental results demonstrate that utilising contextual information yields superior performance in document-level translation compared to translating documents as isolated text segments, and that models fine-tuned with multilingual data outperform those fine-tuned with bilingual data.
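The abstract does not spell out implementation details, but the core idea (translating each chat message with a sliding window of preceding turns as context) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, prompt wording, `translate_fn` hook, and default window size of three are all illustrative assumptions.

```python
# Sketch of context-aware sliding-window decoding for chat translation.
# All names and the prompt template are hypothetical; `translate_fn` stands
# in for any LLM call that maps a prompt string to a translation string.

def build_prompt(pairs, index, window_size=3):
    """Build a prompt for pairs[index], prepending up to `window_size`
    preceding (source, translation) turns as context."""
    start = max(0, index - window_size)
    context_lines = []
    for src, tgt in pairs[start:index]:
        # Earlier turns appear with their already-produced translations so
        # the model can resolve cross-message dependencies (e.g. pronouns,
        # terminology) consistently.
        context_lines.append(f"Source: {src}\nTranslation: {tgt}")
    context = "\n\n".join(context_lines)
    header = ("Translate the final message from English to German, "
              "staying consistent with the preceding dialogue.\n\n")
    body = (context + "\n\n") if context else ""
    return header + body + f"Source: {pairs[index][0]}\nTranslation:"


def translate_chat(messages_src, translate_fn, window_size=3):
    """Translate a chat one message at a time; each output is fed back
    into the window as context for later messages."""
    pairs = [(src, None) for src in messages_src]
    for i, (src, _) in enumerate(pairs):
        prompt = build_prompt(pairs, i, window_size)
        pairs[i] = (src, translate_fn(prompt))
    return [tgt for _, tgt in pairs]
```

Because the window slides one message at a time, the context length stays bounded regardless of chat length, which is what makes this tractable for long conversations; varying `window_size` corresponds to the window-size ablation the abstract describes.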
Anthology ID:
2024.wmt-1.101
Volume:
Proceedings of the Ninth Conference on Machine Translation
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Venue:
WMT
Publisher:
Association for Computational Linguistics
Pages:
1004–1010
URL:
https://aclanthology.org/2024.wmt-1.101
Cite (ACL):
Xinye Yang, Yida Mu, Kalina Bontcheva, and Xingyi Song. 2024. Optimising LLM-Driven Machine Translation with Context-Aware Sliding Windows. In Proceedings of the Ninth Conference on Machine Translation, pages 1004–1010, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Optimising LLM-Driven Machine Translation with Context-Aware Sliding Windows (Yang et al., WMT 2024)
PDF:
https://aclanthology.org/2024.wmt-1.101.pdf