Enhancing Translation Accuracy and Consistency through Large Language Models

Mei Chai Zheng


Abstract
Recent advancements in neural machine translation (NMT) have significantly improved translation accuracy between languages. However, challenges such as adherence to translation memories, context-specific terminology, and a consistent formality register remain pervasive hurdles. This presentation explores the integration of Large Language Models (LLMs) into the MT pipeline to address these specific issues, demonstrating substantial improvements in translation quality and contextual appropriateness.
Anthology ID:
2024.amta-presentations.3
Volume:
Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 2: Presentations)
Month:
September
Year:
2024
Address:
Chicago, USA
Editors:
Marianna Martindale, Janice Campbell, Konstantin Savenkov, Shivali Goel
Venue:
AMTA
Publisher:
Association for Machine Translation in the Americas
Pages:
19–29
URL:
https://aclanthology.org/2024.amta-presentations.3
Cite (ACL):
Mei Chai Zheng. 2024. Enhancing Translation Accuracy and Consistency through Large Language Models. In Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 2: Presentations), pages 19–29, Chicago, USA. Association for Machine Translation in the Americas.
Cite (Informal):
Enhancing Translation Accuracy and Consistency through Large Language Models (Zheng, AMTA 2024)
PDF:
https://aclanthology.org/2024.amta-presentations.3.pdf