2025
The Devil is in the Details: Assessing the Effects of Machine-Translation on LLM Performance in Domain-Specific Texts
Javier Osorio | Afraa Alshammari | Naif Alatrush | Dagmar Heintze | Amber Converse | Sultan Alsarra | Latifur Khan | Patrick T. Brandt | Vito D’Orazio
Proceedings of Machine Translation Summit XX: Volume 1
Conflict scholars increasingly use computational tools to track violence and cooperation at a global scale. To study foreign locations, researchers often use machine translation (MT) tools, but rarely evaluate the quality of the MT output or its effects on Large Language Model (LLM) performance. Using a domain-specific multilingual parallel corpus, this study evaluates the quality of several MT tools for text in English, Arabic, and Spanish. Using ConfliBERT, a domain-specific LLM, the study evaluates the effect of MT texts on model performance, and finds that MT texts tend to yield better results than native texts. The MT quality assessment reveals considerable translation-induced distortions, reductions in vocabulary size and text specialization, and changes in syntactical structure. Regression analysis at the sentence level reveals that such distortions, particularly reductions in general and domain vocabulary rarity, artificially boost LLM performance by simplifying the MT output. This finding cautions researchers and practitioners about uncritically relying on MT tools without considering MT-induced data loss.
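A minimal sketch of one distortion metric the abstract mentions: comparing vocabulary size and type-token ratio between native and machine-translated text. The tokenizer, the toy sentences, and the variable names are illustrative, not the paper's actual pipeline.

```python
def vocab_stats(sentences):
    """Return (vocabulary size, type-token ratio) for a list of sentences."""
    tokens = [tok.lower() for s in sentences for tok in s.split()]
    vocab = set(tokens)
    return len(vocab), len(vocab) / max(len(tokens), 1)

# Hypothetical native sentences vs. their machine translations.
native = ["the insurgents attacked the garrison at dawn",
          "troops repelled the assault near the border"]
mt     = ["the rebels attacked the base at dawn",
          "the troops stopped the attack at the border"]

native_vocab, native_ttr = vocab_stats(native)
mt_vocab, mt_ttr = vocab_stats(mt)
# A smaller MT vocabulary signals the lexical simplification the study measures.
print(native_vocab, mt_vocab)
```

In the study's framing, such simplification is what can artificially inflate downstream LLM scores on MT text.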
Advancing Active Learning with Ensemble Strategies
Naif Alatrush | Sultan Alsarra | Afraa Alshammari | Luay Abdeljaber | Niamat Zawad | Latifur Khan | Patrick T. Brandt | Javier Osorio | Vito D’Orazio
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Active learning (AL) reduces annotation costs by selecting the most informative samples for labeling. However, traditional AL methods rely on a single heuristic, limiting data exploration and annotation efficiency. This paper introduces two ensemble-based AL methods: Ensemble Union, which combines multiple heuristics to improve dataset exploration, and Ensemble Intersection, which applies majority voting for robust sample selection. We evaluate these approaches on the United Nations Parallel Corpus (UNPC) in both English and Spanish using domain-specific models such as ConfliBERT. Our results show that ensemble-based AL strategies outperform individual heuristics, achieving classification performance comparable to full dataset training while using significantly fewer labeled examples. Although focused on political texts, the proposed methods are applicable to broader NLP annotation tasks where labeling costs are high.
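The two ensemble strategies the abstract describes can be sketched as set operations over per-heuristic rankings. The heuristic names, rankings, and budget below are toy values, not the paper's implementation.

```python
def select_union(heuristic_rankings, budget):
    """Ensemble Union: pool the top-`budget` picks of every heuristic."""
    selected = set()
    for ranking in heuristic_rankings:
        selected.update(ranking[:budget])
    return selected

def select_intersection(heuristic_rankings, budget, min_votes=2):
    """Ensemble Intersection: keep samples picked by a majority of heuristics."""
    votes = {}
    for ranking in heuristic_rankings:
        for idx in ranking[:budget]:
            votes[idx] = votes.get(idx, 0) + 1
    return {idx for idx, v in votes.items() if v >= min_votes}

# Three hypothetical heuristics rank unlabeled sample indices by informativeness.
entropy_rank    = [4, 1, 7, 2]
margin_rank     = [1, 4, 9, 3]
least_conf_rank = [4, 9, 1, 5]
rankings = [entropy_rank, margin_rank, least_conf_rank]

print(select_union(rankings, budget=2))         # broader exploration
print(select_intersection(rankings, budget=2))  # more conservative selection
```

Union widens dataset exploration by accepting any heuristic's picks, while intersection trades coverage for robustness by requiring agreement.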
2024
Leveraging Codebook Knowledge with NLI and ChatGPT for Zero-Shot Political Relation Classification
Yibo Hu | Erick Skorupa Parolin | Latifur Khan | Patrick Brandt | Javier Osorio | Vito D’Orazio
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Is it possible to accurately classify political relations within evolving event ontologies without extensive annotations? This study investigates zero-shot learning methods that use expert knowledge from an existing annotation codebook, and evaluates the performance of advanced ChatGPT (GPT-3.5/4) and a natural language inference (NLI)-based model called ZSP. ChatGPT uses the codebook’s labeled summaries as prompts, whereas ZSP breaks down the classification task into context, event mode, and class disambiguation to refine task-specific hypotheses. This decomposition enhances interpretability, efficiency, and adaptability to schema changes. The experiments reveal ChatGPT’s strengths and limitations, and crucially show that ZSP outperforms dictionary-based methods and is competitive with some supervised models. These findings affirm the value of ZSP for validating event records and advancing ontology development. Our study underscores the efficacy of leveraging transfer learning and existing domain expertise to enhance research efficiency and scalability.
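The core NLI-based zero-shot pattern behind a model like ZSP can be sketched as follows: each candidate class becomes a hypothesis, and the class whose hypothesis is most entailed by the sentence wins. The scorer below is a toy keyword-overlap stand-in for a real NLI model, and the label set and template are hypothetical.

```python
def entailment_score(premise, hypothesis):
    """Toy stand-in for an NLI model's entailment probability."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / len(h)

def zero_shot_classify(sentence, labels, template="this text is about {}"):
    """Score one hypothesis per label; return the best-entailed label."""
    hypotheses = {label: template.format(label) for label in labels}
    scores = {label: entailment_score(sentence, hyp)
              for label, hyp in hypotheses.items()}
    return max(scores, key=scores.get)

labels = ["material conflict", "verbal cooperation"]
print(zero_shot_classify("Rebels shelled the town in a renewed conflict", labels))
```

ZSP's contribution, per the abstract, is decomposing this single entailment step into context, event mode, and class disambiguation stages, which makes the hypotheses easier to adapt when the ontology changes.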
2023
ConfliBERT-Arabic: A Pre-trained Arabic Language Model for Politics, Conflicts and Violence
Sultan Alsarra | Luay Abdeljaber | Wooseong Yang | Niamat Zawad | Latifur Khan | Patrick Brandt | Javier Osorio | Vito D’Orazio
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
This study investigates the use of Natural Language Processing (NLP) methods to analyze politics, conflicts, and violence in the Middle East using domain-specific pre-trained language models. We present ConfliBERT-Arabic, a pre-trained Arabic language model that can efficiently analyze political, conflict-, and violence-related texts. Our technique adapts a pre-trained model using a corpus of Arabic texts about regional politics and conflicts. We compare the performance of our models to baseline BERT models. Our findings show that the performance of NLP models for Middle Eastern politics and conflict analysis is enhanced by the use of domain-specific pre-trained local language models. This study offers political and conflict analysts, including policymakers, scholars, and practitioners, new approaches and tools for deciphering the intricate dynamics of local politics and conflicts directly in Arabic.
2022
ConfliBERT: A Pre-trained Language Model for Political Conflict and Violence
Yibo Hu | MohammadSaleh Hosseini | Erick Skorupa Parolin | Javier Osorio | Latifur Khan | Patrick Brandt | Vito D’Orazio
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Analyzing conflicts and political violence around the world is a persistent challenge in the political science and policy communities due in large part to the vast volumes of specialized text needed to monitor conflict and violence on a global scale. To help advance research in political science, we introduce ConfliBERT, a domain-specific pre-trained language model for conflict and political violence. We first gather a large domain-specific text corpus for language modeling from various sources. We then build ConfliBERT using two approaches: pre-training from scratch and continual pre-training. To evaluate ConfliBERT, we collect 12 datasets and implement 18 tasks to assess the models’ practical application in conflict research. Finally, we evaluate several versions of ConfliBERT in multiple experiments. Results consistently show that ConfliBERT outperforms BERT when analyzing political violence and conflict.
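The continual pre-training the abstract mentions optimizes a masked-language-modeling objective on domain text. Real training would use a library such as Hugging Face Transformers; the toy masker below only illustrates the data-preparation step, and all names in it are illustrative.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace tokens with [MASK], returning model inputs and labels."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(mask_token)
            labels.append(tok)   # the model must predict the original token here
        else:
            inputs.append(tok)
            labels.append(None)  # unmasked positions are ignored in the loss
    return inputs, labels

tokens = "rebel forces clashed with government troops near the capital".split()
inputs, labels = mask_tokens(tokens)
print(inputs)
```

Continual pre-training keeps the published BERT weights and continues this objective on conflict-domain text, whereas pre-training from scratch starts the same objective from random weights.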