Justin Lee
2025
Command R7B Arabic: a small, enterprise-focused, multilingual, and culturally aware Arabic LLM
Yazeed Alnumay | Alexandre Barbet | Anna Bialas | William Darling | Shaan Desai | Joan Devassy | Kyle Duffy | Stephanie Howe | Olivia Lasche | Justin Lee | Anirudh Shrinivason | Jennifer Tracey
Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025)
Building high-quality large language models (LLMs) for enterprise Arabic applications remains challenging due to the limited availability of digitized Arabic data. In this work, we present a data synthesis and refinement strategy that helps address this problem by leveraging synthetic data generation and human-in-the-loop annotation to expand our Arabic training corpus. We further present our iterative post-training recipe, which is essential to achieving state-of-the-art performance in aligning the model with human preferences, a critical aspect of enterprise use cases. The culmination of this effort is the release of a small (7B), open-weight model that outperforms similarly sized peers in head-to-head comparisons and on Arabic-focused benchmarks covering cultural knowledge, instruction following, RAG, and contextual faithfulness.
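To make the synthesis-plus-review idea concrete, here is a minimal Python sketch of a loop that auto-accepts high-confidence synthetic examples and routes borderline ones to human annotators. Every name in it (generate_candidate, quality_score, the 0.8 threshold) is a hypothetical stand-in for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch only: all functions below are stand-ins, not the
# paper's data pipeline.
import random
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    response: str
    score: float = 0.0

def generate_candidate(seed_prompt: str) -> Example:
    # Stand-in for sampling an Arabic instruction/response pair from a
    # teacher model; here we fabricate a trivial response.
    return Example(prompt=seed_prompt, response=f"draft answer to: {seed_prompt}")

def quality_score(ex: Example) -> float:
    # Stand-in for an automatic judge (e.g., heuristic filters or an
    # LLM grader for fluency and faithfulness).
    return random.random()

def build_corpus(seeds: list[str], threshold: float = 0.8):
    accepted, needs_review = [], []
    for seed in seeds:
        ex = generate_candidate(seed)
        ex.score = quality_score(ex)
        # High-confidence generations go straight into the corpus;
        # borderline ones are queued for human-in-the-loop annotation.
        (accepted if ex.score >= threshold else needs_review).append(ex)
    return accepted, needs_review

if __name__ == "__main__":
    corpus, review_queue = build_corpus(["اشرح مفهوم التعلم الآلي"] * 5)
    print(len(corpus), "auto-accepted;", len(review_queue), "sent to annotators")
```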
2024
Methods, Applications, and Directions of Learning-to-Rank in NLP Research
Justin Lee | Gabriel Bernier-Colborne | Tegan Maharaj | Sowmya Vajjala
Findings of the Association for Computational Linguistics: NAACL 2024
Learning-to-rank (LTR) algorithms aim to order a set of items according to some criteria. They are at the core of applications such as web search and social media recommendations, and are an area of rapidly increasing interest, with the rise of large language models (LLMs) and the widespread impact of these technologies on society. In this paper, we survey the diverse use cases of LTR methods in natural language processing (NLP) research, looking at previously under-studied aspects such as multilingualism in LTR applications and statistical significance testing for LTR problems. We also consider how large language models are changing the LTR landscape. This survey is aimed at NLP researchers and practitioners interested in understanding the formalisms and best practices regarding the application of LTR approaches in their research.
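For readers new to the formalism, the toy sketch below implements one classic pairwise LTR objective (the RankNet-style logistic loss on score differences). It is an illustrative example of the family of methods the survey covers, with random features and a linear scorer, not code from the paper.

```python
# RankNet-style pairwise objective on toy data (illustrative only).
import torch

torch.manual_seed(0)
scorer = torch.nn.Linear(16, 1)  # s(x): feature vector -> relevance score
opt = torch.optim.SGD(scorer.parameters(), lr=0.1)

# Toy pairs: item_a is labeled more relevant than item_b in every pair.
item_a, item_b = torch.randn(32, 16), torch.randn(32, 16)

for step in range(100):
    s_a, s_b = scorer(item_a), scorer(item_b)
    # RankNet models P(a > b) = sigmoid(s_a - s_b); train it toward 1.
    p = torch.sigmoid(s_a - s_b)
    loss = torch.nn.functional.binary_cross_entropy(p, torch.ones_like(p))
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final pairwise loss: {loss.item():.4f}")
```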
2022
A Neural Pairwise Ranking Model for Readability Assessment
Justin Lee | Sowmya Vajjala
Findings of the Association for Computational Linguistics: ACL 2022
Automatic Readability Assessment (ARA), the task of assigning a reading level to a text, is traditionally treated as a classification problem in NLP research. In this paper, we propose the first neural, pairwise ranking approach to ARA and compare it with existing classification, regression, and (non-neural) ranking methods. We establish the performance of our approach by conducting experiments with three English, one French, and one Spanish dataset. We demonstrate that our approach performs well in monolingual single- and cross-corpus testing scenarios and achieves a zero-shot cross-lingual ranking accuracy of over 80% for both French and Spanish when trained on English data. Additionally, we release a new parallel bilingual readability dataset that could be useful for future research. To our knowledge, this paper proposes the first neural pairwise ranking model for ARA and shows the first results of cross-lingual, zero-shot evaluation of ARA with neural models.
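The sketch below shows the general shape of a pairwise ranking setup for readability: a shared scorer assigns each text a difficulty score, and a margin ranking loss pushes the harder text's score above the easier one's. The paper's model uses learned neural text encoders; the feature extractor here (sentence length and average word length) is a deliberately crude stand-in for illustration.

```python
# Toy pairwise readability ranker (illustrative sketch, not the paper's model).
import torch

def crude_features(text: str) -> torch.Tensor:
    # Stand-in for a neural text encoder: two surface features.
    words = text.split()
    avg_word_len = sum(map(len, words)) / max(len(words), 1)
    return torch.tensor([float(len(words)), avg_word_len])

scorer = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
loss_fn = torch.nn.MarginRankingLoss(margin=1.0)
opt = torch.optim.Adam(scorer.parameters(), lr=0.01)

# Each pair: (harder text, easier text); target = 1 means "first scores higher".
pairs = [
    ("The ramifications of quantitative easing remain contentious.", "The cat sat."),
    ("Photosynthesis converts electromagnetic radiation into chemical energy.", "Dogs run fast."),
]

for epoch in range(200):
    for hard, easy in pairs:
        s_hard, s_easy = scorer(crude_features(hard)), scorer(crude_features(easy))
        loss = loss_fn(s_hard, s_easy, torch.ones(1))
        opt.zero_grad(); loss.backward(); opt.step()

# Pairwise ranking accuracy: fraction of pairs where the harder text scores higher.
print(all(scorer(crude_features(h)) > scorer(crude_features(e)) for h, e in pairs))
```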