Evaluating Large Language Models for In-Context Learning of Linguistic Patterns In Unseen Low Resource Languages

Hongpu Zhu, Yuqi Liang, Wenjing Xu, Hongzhi Xu


Abstract
This paper investigates the ability of Large Language Models (LLMs) to capture linguistic patterns from unseen languages and apply them to translation between those languages and English within an in-context learning framework. Inspired by the International Linguistics Olympiad (IOL), we create test data consisting of translation puzzles between 40 low-resource languages and English. We test the LLMs with two different strategies: direct prompting and step-by-step prompting. In the latter, the puzzles are manually decomposed into intermediate steps so that LLMs can learn and apply linguistic rules incrementally. The results show that this strategy can significantly improve the performance of LLMs, achieving comparable or slightly superior results to humans when translating the unseen languages into English. However, LLMs still struggle with translating English into the unseen languages, typically those with complex syntactic rules. We further observe that LLMs perform worse on languages with object-subject and noun-adjective word order than on others, reflecting the potential influence of the typological features of the languages in the training data.
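The abstract contrasts direct prompting with step-by-step prompting. The minimal Python sketch below is not from the paper; it only illustrates how the two kinds of prompts might be assembled. The function query_llm is a hypothetical stand-in for whatever chat-completion API is used, and the toy puzzle data is invented for illustration.

```python
# Illustrative sketch (not the authors' code) of the two prompting strategies
# described in the abstract. `query_llm` is a hypothetical wrapper around an
# LLM API call; the puzzle data below is a made-up toy example.

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an actual LLM chat-completion call."""
    raise NotImplementedError("Replace with a real API call.")

# A toy IOL-style puzzle: parallel sentences plus one held-out test item.
PARALLEL_EXAMPLES = [
    ("mi moku", "I eat"),
    ("sina moku", "you eat"),
]
TEST_SENTENCE = "sina moku e kili"

def direct_prompt() -> str:
    """Strategy 1: give all parallel examples at once and ask for a translation."""
    examples = "\n".join(f"{src} = {tgt}" for src, tgt in PARALLEL_EXAMPLES)
    prompt = (
        "Here are sentences in an unknown language with English translations:\n"
        f"{examples}\n"
        f"Translate into English: {TEST_SENTENCE}"
    )
    return query_llm(prompt)

def step_by_step_prompt() -> str:
    """Strategy 2: decompose the puzzle so rules are learned and applied incrementally."""
    steps = [
        "Step 1: From 'mi moku' = 'I eat' and 'sina moku' = 'you eat', "
        "state the likely meanings of 'mi', 'sina', and 'moku'.",
        "Step 2: Hypothesize the word-order rule (e.g., subject before verb).",
        f"Step 3: Using those rules, translate '{TEST_SENTENCE}' into English.",
    ]
    answer = ""
    for step in steps:
        # Carry earlier answers forward so inferred rules accumulate across turns.
        answer = query_llm(f"{answer}\n{step}".strip())
    return answer
```

The step-by-step variant mirrors the paper's idea of manual decomposition: vocabulary is induced first, then a syntactic rule, and only then is the test sentence translated.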
Anthology ID:
2025.loreslm-1.31
Volume:
Proceedings of the First Workshop on Language Models for Low-Resource Languages
Month:
January
Year:
2025
Address:
Abu Dhabi, United Arab Emirates
Editors:
Hansi Hettiarachchi, Tharindu Ranasinghe, Paul Rayson, Ruslan Mitkov, Mohamed Gaber, Damith Premasiri, Fiona Anting Tan, Lasitha Uyangodage
Venues:
LoResLM | WS
Publisher:
Association for Computational Linguistics
Pages:
414–426
URL:
https://aclanthology.org/2025.loreslm-1.31/
Cite (ACL):
Hongpu Zhu, Yuqi Liang, Wenjing Xu, and Hongzhi Xu. 2025. Evaluating Large Language Models for In-Context Learning of Linguistic Patterns In Unseen Low Resource Languages. In Proceedings of the First Workshop on Language Models for Low-Resource Languages, pages 414–426, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Evaluating Large Language Models for In-Context Learning of Linguistic Patterns In Unseen Low Resource Languages (Zhu et al., LoResLM 2025)
PDF:
https://aclanthology.org/2025.loreslm-1.31.pdf