Ritvik Choudhary
2025
Exploring Context Strategies in LLMs for Discourse-Aware Machine Translation
Ritvik Choudhary | Rem Hida | Masaki Hamada | Hayato Futami | Toshiyuki Sekiya
Findings of the Association for Computational Linguistics: EMNLP 2025
While large language models (LLMs) excel at machine translation (MT), how their use of different forms of contextual information affects discourse-level phenomena remains underexplored. We systematically investigate how different forms of context, such as prior source sentences, the models' own generated hypotheses, and reference translations, influence standard MT metrics and specific discourse phenomena (formality, pronoun selection, and lexical cohesion). Evaluating multiple LLMs across multiple domains and language pairs, our findings consistently show that context boosts both translation and discourse-specific performance. Notably, the context strategy of combining source text with the model's own prior hypotheses effectively improves discourse consistency without gold references, demonstrating effective use of the model's own imperfect generations as diverse contextual cues.
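For illustration, a minimal sketch of this "source plus own prior hypotheses" context strategy in prompt form is given below. The function name, prompt wording, and English–German pair are illustrative assumptions, not the paper's exact setup.

```python
from typing import List

def build_context_prompt(prev_sources: List[str],
                         prev_hypotheses: List[str],
                         current_source: str,
                         src_lang: str = "English",
                         tgt_lang: str = "German") -> str:
    """Pair each prior source sentence with the model's own earlier
    hypothesis so discourse cues (formality, pronoun choice, lexical
    cohesion) can carry over to the next sentence."""
    lines = [f"Translate from {src_lang} to {tgt_lang}, staying "
             f"consistent with the preceding translations."]
    for src, hyp in zip(prev_sources, prev_hypotheses):
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {hyp}")
    lines.append(f"{src_lang}: {current_source}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)

# Each new hypothesis would be appended to prev_hypotheses and
# reused as context when translating the next sentence.
print(build_context_prompt(
    prev_sources=["How are you doing?"],
    prev_hypotheses=["Wie geht es Ihnen?"],  # formal register so far
    current_source="I hope you slept well.",
))
```

The key point of the strategy is that no gold references are needed at inference time: the model's own (possibly imperfect) earlier outputs supply the discourse context.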
2022
Grounding in social media: An approach to building a chit-chat dialogue model
Ritvik Choudhary | Daisuke Kawahara
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
Building open-domain dialogue systems capable of rich, human-like conversational ability is one of the fundamental challenges in language generation. However, even with recent advancements in the field, existing open-domain generative models fail to capture and utilize external knowledge, leading to repetitive or generic responses to unseen utterances. Current work on knowledge-grounded dialogue generation primarily focuses on persona incorporation or on searching fact-based structured knowledge sources such as Wikipedia. Our method takes a broader and simpler approach: it aims to improve the raw conversational ability of the system by mimicking human response behavior through the casual interactions found on social media. Using a joint retriever-generator setup, the model queries a large set of filtered Reddit comments to serve as additional context for the seq2seq generator. Automatic and human evaluations on open-domain dialogue datasets demonstrate the effectiveness of our approach.
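A rough sketch of the retrieve-then-generate flow follows; the toy in-memory comment store and word-overlap scorer are illustrative stand-ins for the actual filtered Reddit corpus and retriever, and the concatenation scheme is an assumption.

```python
from typing import List

# Toy comment store standing in for the filtered Reddit corpus;
# both the data and the overlap scorer are illustrative assumptions.
COMMENTS = [
    "I usually unwind with a long walk after work.",
    "Honestly, coffee is the only thing keeping me going.",
    "Weekends are for hiking and terrible movies.",
]

def retrieve(utterance: str, k: int = 2) -> List[str]:
    """Rank stored comments by word overlap with the user utterance."""
    query = set(utterance.lower().split())
    scored = sorted(COMMENTS,
                    key=lambda c: len(query & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_generator_input(utterance: str) -> str:
    """Concatenate retrieved comments with the utterance as extra
    context for a seq2seq generator (e.g., a BART-style model)."""
    context = retrieve(utterance)
    return " [SEP] ".join([utterance] + context)

print(build_generator_input("What do you do after work?"))
```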