2025
The Devil is in the Details: Assessing the Effects of Machine-Translation on LLM Performance in Domain-Specific Texts
Javier Osorio | Afraa Alshammari | Naif Alatrush | Dagmar Heintze | Amber Converse | Sultan Alsarra | Latifur Khan | Patrick T. Brandt | Vito D’Orazio
Proceedings of Machine Translation Summit XX: Volume 1
Conflict scholars increasingly use computational tools to track violence and cooperation at a global scale. To study foreign locations, researchers often use machine translation (MT) tools, but rarely evaluate the quality of the MT output or its effects on Large Language Model (LLM) performance. Using a domain-specific multilingual parallel corpus, this study evaluates the quality of several MT tools for text in English, Arabic, and Spanish. Using ConfliBERT, a domain-specific LLM, the study evaluates the effect of MT texts on model performance and finds that MT texts tend to yield better results than native texts. The MT quality assessment reveals considerable translation-induced distortions, reductions in vocabulary size and text specialization, and changes in syntactic structure. Regression analysis at the sentence level reveals that such distortions, particularly reductions in general and domain vocabulary rarity, artificially boost LLM performance by simplifying the MT output. This finding cautions researchers and practitioners against uncritically relying on MT tools without considering MT-induced data loss.
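The sentence-level regression described above can be illustrated with a small sketch: compute a vocabulary-rarity feature for each machine-translated sentence and regress the model's sentence-level score on it. The rarity definition, token frequencies, sentences, and scores below are illustrative assumptions, not the paper's data or pipeline.

```python
# Minimal sketch (not the authors' pipeline): regress sentence-level LLM
# performance on a translation-induced lexical feature such as vocabulary rarity.
import numpy as np

def rarity(sentence, corpus_freq, total_tokens):
    """Mean negative log relative frequency of a sentence's tokens (higher = rarer vocabulary)."""
    tokens = sentence.lower().split()
    return np.mean([-np.log((corpus_freq.get(t, 0) + 1) / total_tokens) for t in tokens])

# Toy reference frequencies and machine-translated sentences with made-up
# sentence-level F1 scores; replace with real corpus counts and model outputs.
corpus_freq = {"the": 500, "with": 300, "army": 40, "attacked": 25, "clashed": 8, "insurgents": 5}
total = sum(corpus_freq.values())

mt_sents = [
    "the army attacked the insurgents",
    "insurgents clashed with the army",
    "the army attacked",
    "the insurgents clashed",
]
f1_scores = np.array([0.91, 0.82, 0.94, 0.80])

# OLS of F1 on an intercept and the rarity of the MT output: a negative slope
# indicates that rarer vocabulary lowers performance, i.e. that MT-induced
# rarity reductions inflate the scores.
X = np.column_stack([np.ones(len(mt_sents)),
                     [rarity(s, corpus_freq, total) for s in mt_sents]])
beta, *_ = np.linalg.lstsq(X, f1_scores, rcond=None)
print("estimated effect of vocabulary rarity on F1:", round(float(beta[1]), 3))
```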
Advancing Active Learning with Ensemble Strategies
Naif Alatrush | Sultan Alsarra | Afraa Alshammari | Luay Abdeljaber | Niamat Zawad | Latifur Khan | Patrick T. Brandt | Javier Osorio | Vito D’Orazio
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Active learning (AL) reduces annotation costs by selecting the most informative samples for labeling. However, traditional AL methods rely on a single heuristic, limiting data exploration and annotation efficiency. This paper introduces two ensemble-based AL methods: Ensemble Union, which combines multiple heuristics to improve dataset exploration, and Ensemble Intersection, which applies majority voting for robust sample selection. We evaluate these approaches on the United Nations Parallel Corpus (UNPC) in both English and Spanish using domain-specific models such as ConfliBERT. Our results show that ensemble-based AL strategies outperform individual heuristics, achieving classification performance comparable to full dataset training while using significantly fewer labeled examples. Although focused on political texts, the proposed methods are applicable to broader NLP annotation tasks where labeling costs are high.
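As a rough illustration of the two strategies, here is a minimal sketch that assumes three common uncertainty heuristics (entropy, margin, least confidence) as the ensemble members; it is not the paper's implementation.

```python
# Minimal sketch of ensemble-based active-learning selection:
# Ensemble Union keeps any sample picked by at least one heuristic,
# Ensemble Intersection keeps samples picked by a majority of heuristics.
import numpy as np

def entropy_score(probs):
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def margin_score(probs):
    part = np.sort(probs, axis=1)
    return -(part[:, -1] - part[:, -2])      # small top-2 margin -> high uncertainty

def least_confidence(probs):
    return 1.0 - probs.max(axis=1)

HEURISTICS = (entropy_score, margin_score, least_confidence)

def top_k(scores, k):
    """Indices of the k most uncertain samples under one heuristic."""
    return {int(i) for i in np.argsort(scores)[-k:]}

def ensemble_union(probs, k):
    picks = [top_k(h(probs), k) for h in HEURISTICS]
    return set().union(*picks)

def ensemble_intersection(probs, k, min_votes=2):
    picks = [top_k(h(probs), k) for h in HEURISTICS]
    votes = {}
    for pick in picks:
        for i in pick:
            votes[i] = votes.get(i, 0) + 1
    return {i for i, v in votes.items() if v >= min_votes}

# Toy predicted class probabilities for five unlabeled sentences.
probs = np.array([[0.60, 0.40], [0.90, 0.10], [0.55, 0.45], [0.70, 0.30], [0.52, 0.48]])
print("Ensemble Union picks:", sorted(ensemble_union(probs, k=2)))
print("Ensemble Intersection picks:", sorted(ensemble_intersection(probs, k=2)))
```

The union widens exploration of the unlabeled pool, while the majority-vote intersection concentrates the labeling budget on samples that several heuristics agree are informative.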
2023
ConfliBERT-Arabic: A Pre-trained Arabic Language Model for Politics, Conflicts and Violence
Sultan Alsarra | Luay Abdeljaber | Wooseong Yang | Niamat Zawad | Latifur Khan | Patrick Brandt | Javier Osorio | Vito D’Orazio
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
This study investigates the use of Natural Language Processing (NLP) methods to analyze politics, conflicts, and violence in the Middle East using domain-specific pre-trained language models. We present ConfliBERT-Arabic, a pre-trained language model that can efficiently analyze political, conflict-related, and violence-related Arabic texts. Our approach refines a pre-trained model using a corpus of Arabic texts about regional politics and conflicts. The performance of our models is compared to baseline BERT models. Our findings show that the performance of NLP models for Middle Eastern politics and conflict analysis is enhanced by the use of domain-specific pre-trained local language models. This study offers political and conflict analysts, including policymakers, scholars, and practitioners, new approaches and tools for deciphering the intricate dynamics of local politics and conflicts directly in Arabic.
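For readers who want to try a domain-specific Arabic encoder on conflict-related text, the following is a minimal usage sketch with Hugging Face Transformers; the model identifier, the binary classification head, and the example sentence are assumptions for illustration, not the exact checkpoint or task setup released with the paper.

```python
# Minimal usage sketch with Hugging Face Transformers; not the paper's exact setup.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Assumed placeholder identifier; substitute the ConfliBERT-Arabic checkpoint released by the authors.
model_id = "eventdata-utd/ConfliBERT-Arabic"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels=2 attaches a binary conflict / non-conflict head; it is randomly
# initialized here and must be fine-tuned on labeled data before real use.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# "Clashes broke out between the army and armed men in the area."
text = "اندلعت اشتباكات بين الجيش والمسلحين في المنطقة"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted label:", int(logits.argmax(dim=-1)))
```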