Bernard Ghanem
2026
Hala Technical Report: Building Arabic-Centric Instruction & Translation Models at Scale
Hasan Abed Al Kader Hammoud | Mohamad Bilal Zbib | Bernard Ghanem
Proceedings of the 2nd Workshop on NLP for Languages Using Arabic Script
We present HALA, a family of Arabic-centric instruction and translation models built with our translate-and-tune pipeline. We first compress a strong AR↔EN teacher to FP8 (yielding ~2× higher throughput with no quality loss) and use it to create high-fidelity bilingual supervision. A lightweight language model, LFM2-1.2B, is then fine-tuned on this data and used to translate high-quality English instruction sets into Arabic, producing a million-scale corpus tailored to instruction following. We train HALA models at 350M, 700M, 1.2B, and 9B parameters, and apply slerp merging to balance Arabic specialization with base-model strengths. On Arabic-centric benchmarks, HALA achieves state-of-the-art results within both the "nano" (≤2B) and "small" (7–9B) categories, outperforming their bases. We are committed to releasing models, data, evaluation, and recipes to accelerate research in Arabic NLP.
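The slerp (spherical linear interpolation) merging mentioned in this abstract can be sketched as follows. This is a generic illustration of the technique, not the paper's exact recipe: the `slerp` function and the toy parameter vectors are hypothetical, and in practice the interpolation would be applied per weight tensor across two model checkpoints.

```python
import numpy as np

def slerp(w0, w1, t, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns w0, t=1 returns w1; intermediate t follows the great-circle
    arc between the (direction-normalized) weight vectors.
    """
    w0 = np.asarray(w0, dtype=np.float64)
    w1 = np.asarray(w1, dtype=np.float64)
    n0, n1 = np.linalg.norm(w0), np.linalg.norm(w1)
    # Angle between the two weight directions
    cos_omega = np.clip(np.dot(w0 / n0, w1 / n1), -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * w0 + t * w1
    s = np.sin(omega)
    return (np.sin((1 - t) * omega) / s) * w0 + (np.sin(t * omega) / s) * w1

# Toy example: merge two hypothetical parameter vectors,
# weighting the specialized model at t=0.6
base = np.array([1.0, 0.0])
specialized = np.array([0.0, 1.0])
merged = slerp(base, specialized, 0.6)
```

Unlike plain linear averaging, slerp preserves the norm when interpolating between unit-norm tensors, which is one common motivation for using it in model merging.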
AraLingBench: A Human-Annotated Benchmark for Evaluating Arabic Linguistic Capabilities of Large Language Models
Mohamad Bilal Zbib | Hasan Abed Al Kader Hammoud | Ammar Mohanna | Nadine Rizk | Fatima Karnib | Sina Moukaled | Bernard Ghanem
Proceedings of the 2nd Workshop on NLP for Languages Using Arabic Script
We present AraLingBench, a fully human-annotated benchmark for evaluating the Arabic linguistic competence of large language models (LLMs). The benchmark spans five core categories: grammar, morphology, spelling, reading comprehension, and syntax, through 150 expert-designed multiple choice questions that directly assess structural language understanding. Evaluating 35 Arabic and bilingual LLMs reveals that current models demonstrate strong surface-level proficiency but struggle with deeper grammatical and syntactic reasoning. AraLingBench highlights a persistent gap between high scores on knowledge-based benchmarks and true linguistic mastery, showing that many models succeed through memorization or pattern recognition rather than authentic comprehension. By isolating and measuring fundamental linguistic skills, AraLingBench provides a diagnostic framework for developing Arabic LLMs. The benchmark and evaluation code are available on Hugging Face and GitHub.
2025
CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents
Tianqi Xu | Linyao Chen | Dai-Jie Wu | Yanjun Chen | Zecheng Zhang | Xiang Yao | Zhiqiang Xie | Yongchao Chen | Shilong Liu | Bochen Qian | Anjie Yang | Zhaoxuan Jin | Jianbo Deng | Philip Torr | Bernard Ghanem | Guohao Li
Findings of the Association for Computational Linguistics: ACL 2025
The development of autonomous agents increasingly relies on Multimodal Language Models (MLMs) to perform tasks described in natural language within GUI environments, such as websites, desktop computers, or mobile phones. Existing benchmarks for MLM agents in interactive environments are limited by their focus on a single environment, lack of detailed and generalized evaluation methods, and the complexities of constructing tasks and evaluators. To overcome these limitations, we introduce CRAB, the first cross-environment agent benchmark framework, incorporating a graph-based fine-grained evaluation method and an efficient task generation method. Our framework supports multiple devices and can be easily extended to any environment with a Python interface. Leveraging CRAB, we develop CRAB Benchmark-v0, comprising 120 tasks in computer desktop and mobile phone environments. We evaluated 6 advanced MLMs using different single- and multi-agent system configurations on this benchmark. The experimental results demonstrate that the single agent with GPT-4o achieves the best completion ratio of 38.01%.
MOLE: Metadata Extraction and Validation in Scientific Papers Using LLMs
Zaid Alyafeai | Maged S. Al-shaibani | Bernard Ghanem
Findings of the Association for Computational Linguistics: EMNLP 2025
Metadata extraction is essential for cataloging and preserving datasets, enabling effective research discovery and reproducibility, especially given the current exponential growth in scientific research. While Masader (CITATION) laid the groundwork for extracting a wide range of metadata attributes from Arabic NLP datasets’ scholarly articles, it relies heavily on manual annotation. In this paper, we present MOLE, a framework that leverages Large Language Models (LLMs) to automatically extract metadata attributes from scientific papers covering datasets of languages other than Arabic. Our schema-driven methodology processes entire documents across multiple input formats and incorporates robust validation mechanisms for consistent output. Additionally, we introduce a new benchmark to evaluate research progress on this task. Through systematic analysis of context length, few-shot learning, and web browsing integration, we demonstrate that modern LLMs show promising results in automating this task, while highlighting the need for further work to ensure consistent and reliable performance.
2024
Model Merging and Safety Alignment: One Bad Model Spoils the Bunch
Hasan Abed Al Kader Hammoud | Umberto Michieli | Fabio Pizzati | Philip Torr | Adel Bibi | Bernard Ghanem | Mete Ozay
Findings of the Association for Computational Linguistics: EMNLP 2024
Merging Large Language Models (LLMs) is a cost-effective technique for combining multiple expert LLMs into a single versatile model, retaining the expertise of the original ones. However, current approaches often overlook the importance of safety alignment during merging, leading to highly misaligned models. This work investigates the effects of model merging on alignment. We evaluate several popular model merging techniques, demonstrating that existing methods not only transfer domain expertise but also propagate misalignment. We propose a simple two-step approach to address this problem: (i) generating synthetic safety and domain-specific data, and (ii) incorporating these generated data into the optimization process of existing data-aware model merging techniques. This allows us to treat alignment as a skill that can be maximized in the resulting merged LLM. Our experiments illustrate the effectiveness of integrating alignment-related data during merging, resulting in models that excel in both domain expertise and alignment.
2021
Relation-aware Video Reading Comprehension for Temporal Language Grounding
Jialin Gao | Xin Sun | Mengmeng Xu | Xi Zhou | Bernard Ghanem
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Temporal language grounding in videos aims to localize the temporal span relevant to the given query sentence. Previous methods treat it either as a boundary regression task or a span extraction task. This paper formulates temporal language grounding as video reading comprehension and proposes a Relation-aware Network (RaNet) to address it. This framework aims to select a video moment choice from the predefined answer set with the aid of coarse-and-fine choice-query interaction and choice-choice relation construction. A choice-query interactor is proposed to match the visual and textual information simultaneously at the sentence-moment and token-moment levels, leading to a coarse-and-fine cross-modal interaction. Moreover, a novel multi-choice relation constructor is introduced by leveraging graph convolution to capture the dependencies among video moment choices for the best choice selection. Extensive experiments on ActivityNet-Captions, TACoS, and Charades-STA demonstrate the effectiveness of our solution. Codes will be available at https://github.com/Huntersxsx/RaNet.
Co-authors
- Hasan Abed Al Kader Hammoud 3
- Philip Torr 2
- Mohamad Bilal Zbib 2
- Maged S. Al-shaibani 1
- Zaid Alyafeai 1
- Adel Bibi 1
- Linyao Chen 1
- Yanjun Chen 1
- Yongchao Chen 1
- Jianbo Deng 1
- Jialin Gao 1
- Zhaoxuan Jin 1
- Fatima Karnib 1
- Guohao Li 1
- Shilong Liu 1
- Umberto Michieli 1
- Ammar Mohanna 1
- Sina Moukaled 1
- Mete Ozay 1
- Fabio Pizzati 1
- Bochen Qian 1
- Nadine Rizk 1
- Xin Sun 1
- Dai-Jie Wu 1
- Zhiqiang Xie (谢志强) 1
- Mengmeng Xu 1
- Tianqi Xu 1
- Anjie Yang 1
- Xiang Yao 1
- Zecheng Zhang 1
- Xi Zhou 1