Exploring Context Strategies in LLMs for Discourse-Aware Machine Translation
Ritvik Choudhary | Rem Hida | Masaki Hamada | Hayato Futami | Toshiyuki Sekiya
Findings of the Association for Computational Linguistics: EMNLP 2025
While large language models (LLMs) excel at machine translation (MT), how their use of different forms of contextual information affects discourse-level phenomena remains underexplored. We systematically investigate how different forms of context, such as prior source sentences, the model's previously generated hypotheses, and reference translations, influence standard MT metrics and specific discourse phenomena (formality, pronoun selection, and lexical cohesion). Evaluating multiple LLMs across multiple domains and language pairs, our findings consistently show that context boosts both overall translation quality and discourse-specific performance. Notably, the context strategy of combining source text with the model's own prior hypotheses effectively improves discourse consistency without gold references, demonstrating effective use of the model's own imperfect generations as diverse contextual cues.
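To illustrate the kind of context strategy the abstract describes, the sketch below shows one plausible way to feed a model's own prior hypotheses back as target-side context during document-level translation. This is a minimal, hypothetical example rather than the authors' implementation: the function names, the prompt template, the context window size, and the `translate_fn` callable are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): assemble a context-augmented
# translation prompt in which prior source sentences and the model's own
# earlier hypotheses are prepended before the current sentence.

def build_context_prompt(prev_sources, prev_hypotheses, current_source,
                         src_lang="English", tgt_lang="German"):
    """Build a prompt that asks for a translation consistent with the
    preceding source/hypothesis pairs (formality, pronouns, lexical choice)."""
    lines = [f"Translate the following {src_lang} sentence into {tgt_lang}, "
             f"keeping formality, pronoun choice, and lexical cohesion "
             f"consistent with the preceding context."]
    for src, hyp in zip(prev_sources, prev_hypotheses):
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {hyp}")
    lines.append(f"{src_lang}: {current_source}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)


def translate_document(sentences, translate_fn, window=3):
    """Translate sentences in order, reusing the model's own hypotheses as
    context for later sentences; no gold references are required."""
    hypotheses = []
    for i, sent in enumerate(sentences):
        prompt = build_context_prompt(sentences[max(0, i - window):i],
                                      hypotheses[max(0, i - window):i],
                                      sent)
        hypotheses.append(translate_fn(prompt))  # any LLM call works here
    return hypotheses
```

In this reading, each newly generated hypothesis becomes context for the next sentence, which is how imperfect generations can still act as useful cues for discourse consistency.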