Zero-Shot Cross-Lingual Reranking with Large Language Models for Low-Resource Languages

Mofetoluwa Adeyemi, Akintunde Oladipo, Ronak Pradeep, Jimmy Lin


Abstract
Large language models (LLMs) as listwise rerankers have shown impressive zero-shot capabilities in various passage ranking tasks. Despite their success, there is still a gap in the existing literature on their effectiveness in reranking low-resource languages. To address this, we investigate how LLMs function as listwise rerankers in cross-lingual information retrieval (CLIR) systems with queries in English and passages in four African languages: Hausa, Somali, Swahili, and Yoruba. We analyze and compare the effectiveness of monolingual reranking using either query or document translations. We also evaluate the effectiveness of LLMs when leveraging their own generated translations. To get a broader picture, we examine the effectiveness of multiple LLMs: the proprietary models RankGPT-4 and RankGPT-3.5, along with the open-source model RankZephyr. While the document translation setting, i.e., with both queries and documents in English, leads to the best reranking effectiveness, our results indicate that for specific LLMs, reranking in the African-language setting achieves effectiveness competitive with the cross-lingual setting, and even surpasses it when the LLM uses its own translations.
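This page carries no code, but the listwise reranking the abstract describes follows the RankGPT-style prompting scheme: the query and a numbered list of candidate passages go into a single prompt, and the LLM answers with a permutation of the passage identifiers. Below is a minimal sketch of that scheme, assuming a generic `ask_llm` callable as a hypothetical stand-in for RankGPT-4, RankGPT-3.5, or RankZephyr; the prompt wording and the `[2] > [1]` answer format are illustrative conventions, not taken verbatim from the paper.

```python
import re
from typing import Callable, List

def build_listwise_prompt(query: str, passages: List[str]) -> str:
    """Assemble a listwise prompt: number each candidate passage and
    ask the model to order all identifiers by relevance to the query."""
    lines = [
        f"I will provide you with {len(passages)} passages, each labeled "
        f"with an identifier like [1]. Rank the passages by their relevance "
        f"to the query: {query}",
        "",
    ]
    for i, passage in enumerate(passages, start=1):
        lines.append(f"[{i}] {passage}")
    lines.append("")
    lines.append("Answer with the identifiers in descending order of "
                 "relevance, e.g. [2] > [1] > [3]. Output only the ranking.")
    return "\n".join(lines)

def parse_ranking(response: str, num_passages: int) -> List[int]:
    """Extract the permutation from the model's answer, dropping
    duplicates and appending any identifiers the model omitted."""
    order: List[int] = []
    for token in re.findall(r"\[(\d+)\]", response):
        idx = int(token) - 1
        if 0 <= idx < num_passages and idx not in order:
            order.append(idx)
    order.extend(i for i in range(num_passages) if i not in order)
    return order

def rerank(query: str, passages: List[str],
           ask_llm: Callable[[str], str]) -> List[str]:
    """Zero-shot listwise reranking: one prompt in, one permutation out.
    `ask_llm` is a hypothetical stand-in for a call to RankGPT-4,
    RankGPT-3.5, or RankZephyr."""
    response = ask_llm(build_listwise_prompt(query, passages))
    return [passages[i] for i in parse_ranking(response, len(passages))]
```

In practice, candidate lists longer than the model's context window are handled with a sliding-window pass over the list; the sketch above shows only the single-window case.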
Anthology ID: 2024.acl-short.59
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 650–656
URL: https://aclanthology.org/2024.acl-short.59
DOI: 10.18653/v1/2024.acl-short.59
Cite (ACL): Mofetoluwa Adeyemi, Akintunde Oladipo, Ronak Pradeep, and Jimmy Lin. 2024. Zero-Shot Cross-Lingual Reranking with Large Language Models for Low-Resource Languages. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–656, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Zero-Shot Cross-Lingual Reranking with Large Language Models for Low-Resource Languages (Adeyemi et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-short.59.pdf