An Empirical Study of In-context Learning in LLMs for Machine Translation

Pranjal Chitale, Jay Gala, Raj Dabre


Abstract
Recent interest has surged in employing Large Language Models (LLMs) for machine translation (MT) via in-context learning (ICL) (Vilar et al., 2023). Most prior studies primarily focus on optimizing translation quality, with limited attention to understanding the specific aspects of ICL that influence that quality. To this end, we perform a first-of-its-kind, exhaustive study of in-context learning for MT. We first establish that ICL is primarily example-driven and not instruction-driven. Following this, we conduct an extensive exploration of various aspects of the examples to understand their influence on downstream performance. Our analysis covers factors such as the quality and quantity of demonstrations, spatial proximity, and source versus target originality. Further, we investigate challenging scenarios involving indirectness and misalignment of examples to understand the limits of ICL. While we establish the significance of the quality of the target distribution over the source distribution of demonstrations, we further observe that perturbations sometimes act as regularizers, resulting in performance improvements. Surprisingly, ICL does not necessitate examples from the same task, and a related task with the same target distribution proves sufficient. We hope that our study acts as a guiding resource for considerations in utilizing ICL for MT. Our code is available at https://github.com/PranjalChitale/in-context-mt-analysis.
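For concreteness, the few-shot ICL setup examined in the paper can be sketched as follows: k demonstration pairs (source sentence plus reference translation) are concatenated ahead of the test source, and the LLM completes the empty target slot. The sketch below is a minimal illustration in Python, not the authors' released code; the prompt template, function name, and example pairs are assumptions for illustration only.

# Minimal sketch of few-shot prompt assembly for ICL-based MT.
# Template and names are illustrative assumptions, not the paper's code.
def build_icl_prompt(demos, src_sentence, src_lang="English", tgt_lang="Hindi"):
    """demos: list of (source, target) pairs used as in-context demonstrations."""
    blocks = []
    for src, tgt in demos:
        blocks.append(f"{src_lang}: {src}\n{tgt_lang}: {tgt}")
    # The test input ends with an empty target slot for the model to complete.
    blocks.append(f"{src_lang}: {src_sentence}\n{tgt_lang}:")
    return "\n\n".join(blocks)

demos = [
    ("Good morning.", "सुप्रभात।"),
    ("How are you?", "आप कैसे हैं?"),
]
print(build_icl_prompt(demos, "The weather is nice today."))

The study's manipulated variables map directly onto this sketch: the number of entries in demos (quantity), how they are chosen (quality, proximity to the test source), and whether the source or target side is original versus translated or perturbed text.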
Anthology ID:
2024.findings-acl.440
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7384–7406
URL:
https://aclanthology.org/2024.findings-acl.440
DOI:
10.18653/v1/2024.findings-acl.440
Cite (ACL):
Pranjal Chitale, Jay Gala, and Raj Dabre. 2024. An Empirical Study of In-context Learning in LLMs for Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 7384–7406, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
An Empirical Study of In-context Learning in LLMs for Machine Translation (Chitale et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.440.pdf