TransLLaMa: LLM-based Simultaneous Translation System

Roman Koshkin, Katsuhito Sudoh, Satoshi Nakamura
Abstract
Decoder-only large language models (LLMs) have recently demonstrated impressive capabilities in text generation and reasoning. Nonetheless, they have so far seen limited application in simultaneous machine translation (SiMT), which is currently dominated by encoder-decoder transformers. This study demonstrates that, after fine-tuning on a small dataset of causally aligned source and target sentence pairs, a pre-trained open-source LLM can control input segmentation directly by generating a special “wait” token. This obviates the need for a separate segmentation policy and enables the LLM to perform English-German and English-Russian SiMT with BLEU scores comparable to those of specific state-of-the-art baselines. We also evaluated closed-source models such as GPT-4, which displayed encouraging results on the SiMT task without prior training (zero-shot), indicating a promising avenue for enhancing future SiMT systems.
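
To make the abstract's core mechanism concrete, here is a minimal sketch of what a wait-token decoding loop could look like with the Hugging Face transformers API. The checkpoint name, the <WAIT> token string, and the prompt format are illustrative placeholders, not the authors' actual setup; the paper's own fine-tuning and inference code will differ.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "my-org/transllama-style-sft"  # hypothetical fine-tuned checkpoint
WAIT_TOKEN = "<WAIT>"  # hypothetical special token added during fine-tuning

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()
wait_id = tokenizer.convert_tokens_to_ids(WAIT_TOKEN)

def simultaneous_translate(source_words, max_steps=200):
    """Greedy SiMT loop: the model itself decides when to READ more source
    (by emitting the wait token) and when to WRITE a target token."""
    read_ptr = 0     # how many source words have been revealed so far
    revealed = []    # the source prefix visible to the model
    target_ids = []  # target tokens committed so far
    for _ in range(max_steps):
        prompt = (
            "Translate English to German incrementally.\n"
            f"Source: {' '.join(revealed)}\n"
            f"Target: {tokenizer.decode(target_ids)}"
        )
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            next_id = int(model(input_ids).logits[0, -1].argmax())
        if next_id == wait_id:
            if read_ptr < len(source_words):   # READ: reveal one more word
                revealed.append(source_words[read_ptr])
                read_ptr += 1
                continue
            break                              # source exhausted; stop waiting
        if next_id == tokenizer.eos_token_id:  # model finished the sentence
            break
        target_ids.append(next_id)             # WRITE: commit a target token
    return tokenizer.decode(target_ids)

print(simultaneous_translate("The weather in Miami was lovely .".split()))

Because the READ/WRITE decision is just another next-token prediction, no separate segmentation policy network or fixed wait-k schedule is needed: the fine-tuned LM absorbs the policy into its own generation behavior.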
Anthology ID:
2024.findings-emnlp.27
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
461–476
URL:
https://aclanthology.org/2024.findings-emnlp.27
Cite (ACL):
Roman Koshkin, Katsuhito Sudoh, and Satoshi Nakamura. 2024. TransLLaMa: LLM-based Simultaneous Translation System. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 461–476, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
TransLLaMa: LLM-based Simultaneous Translation System (Koshkin et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.27.pdf