Cross-Lingual Learning-to-Rank with Shared Representations

Shota Sasaki, Shuo Sun, Shigehiko Schamoni, Kevin Duh, Kentaro Inui
Abstract
Cross-lingual information retrieval (CLIR) is a document retrieval task where the documents are written in a language different from that of the user’s query. This is a challenging problem for data-driven approaches due to the general lack of labeled training data. We introduce a large-scale dataset derived from Wikipedia to support CLIR research in 25 languages. Further, we present a simple yet effective neural learning-to-rank model that shares representations across languages and reduces the data requirement. This model can exploit training data in, for example, Japanese-English CLIR to improve the results of Swahili-English CLIR.
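The core idea of sharing representations across languages can be illustrated with a minimal, hypothetical sketch (not the paper's actual architecture): a single embedding table maps query and document tokens from any language into one vector space, and relevance is scored by cosine similarity, so training signal from one language pair can help another. The vocabulary and vectors below are invented for illustration; in practice the embeddings would be learned.

```python
import math

# Hypothetical shared vocabulary -> vector table (invented values;
# in the paper's setting these representations would be learned).
SHARED_EMB = {
    "dog":    [0.90, 0.10],
    "inu":    [0.85, 0.15],   # Japanese "dog", mapped near English "dog"
    "mbwa":   [0.80, 0.20],   # Swahili "dog"
    "car":    [0.10, 0.90],
    "kuruma": [0.12, 0.88],   # Japanese "car"
}

def embed(tokens):
    """Average the shared embeddings of the tokens (bag-of-words encoder)."""
    vecs = [SHARED_EMB[t] for t in tokens if t in SHARED_EMB]
    dim = len(next(iter(SHARED_EMB.values())))
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_tokens, docs):
    """Sort documents by similarity to the query in the shared space."""
    q = embed(query_tokens)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

# English query against Japanese documents: because "dog" and "inu"
# sit close together in the shared space, the "inu" document ranks first.
ranking = rank(["dog"], [["kuruma"], ["inu"]])
```

Because all languages share one scoring function over one space, pairwise ranking examples from a resource-rich pair (e.g. Japanese-English) update the same parameters used for a resource-poor pair (e.g. Swahili-English), which is the transfer effect the abstract describes.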
Anthology ID:
N18-2073
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
458–463
URL:
https://aclanthology.org/N18-2073
DOI:
10.18653/v1/N18-2073
Cite (ACL):
Shota Sasaki, Shuo Sun, Shigehiko Schamoni, Kevin Duh, and Kentaro Inui. 2018. Cross-Lingual Learning-to-Rank with Shared Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 458–463, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Cross-Lingual Learning-to-Rank with Shared Representations (Sasaki et al., NAACL 2018)
PDF:
https://aclanthology.org/N18-2073.pdf
Data
Large-Scale CLIR Dataset