2024
LAraBench: Benchmarking Arabic AI with Large Language Models
Ahmed Abdelali | Hamdy Mubarak | Shammur Chowdhury | Maram Hasanain | Basel Mousi | Sabri Boughorbel | Samir Abdaljalil | Yassine El Kheir | Daniel Izham | Fahim Dalvi | Majd Hawasly | Nizi Nazar | Youssef Elshahawy | Ahmed Ali | Nadir Durrani | Natasa Milic-Frayling | Firoj Alam
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent advancements in Large Language Models (LLMs) have significantly influenced the landscape of language and speech research. Despite this progress, these models have not been systematically benchmarked against state-of-the-art (SOTA) models tailored to particular languages and tasks. LAraBench addresses this gap for Arabic Natural Language Processing (NLP) and Speech Processing tasks, including sequence tagging and content classification across different domains. We utilized models such as GPT-3.5-turbo, GPT-4, BLOOMZ, Jais-13b-chat, Whisper, and USM, employing zero- and few-shot learning techniques to tackle 33 distinct tasks across 61 publicly available datasets. This involved 98 experimental setups, encompassing ~296K data points, ~46 hours of speech, and 30 sentences for Text-to-Speech (TTS), and resulted in 330+ sets of experiments. Our analysis focused on measuring the performance gap between SOTA models and LLMs. The overarching trend was that SOTA models generally outperformed LLMs in zero-shot learning, with a few exceptions. Notably, larger computational models with few-shot learning techniques managed to reduce these performance gaps. Our findings provide valuable insights into the applicability of LLMs for Arabic NLP and speech processing tasks.
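To make the zero- and few-shot setups concrete, here is a minimal, hypothetical sketch of how such prompts can be constructed for an Arabic sentiment classification task. The prompt wording, the label set, and the call_llm placeholder are illustrative assumptions, not the exact prompts or tooling used in LAraBench.

```python
# Hypothetical sketch of zero- vs. few-shot prompting for an Arabic
# sentiment classification task. Prompt wording, labels, and call_llm()
# are assumptions for illustration, not the setup used in the paper.

def zero_shot_prompt(text: str) -> str:
    # Zero-shot: no labeled examples, the model relies on pretraining alone.
    return (
        "Classify the sentiment of the following Arabic sentence as "
        f"Positive, Negative, or Neutral.\n\nSentence: {text}\nLabel:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: a handful of labeled (sentence, label) pairs are prepended
    # so the model can imitate the input/output format.
    demos = "\n\n".join(
        f"Sentence: {sentence}\nLabel: {label}" for sentence, label in examples
    )
    return (
        "Classify the sentiment of the following Arabic sentences as "
        f"Positive, Negative, or Neutral.\n\n{demos}\n\nSentence: {text}\nLabel:"
    )

# Usage, where call_llm is a placeholder for GPT-4, Jais-13b-chat, etc.:
# label = call_llm(few_shot_prompt(sample, train_examples[:3]))
```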
LLMeBench: A Flexible Framework for Accelerating LLMs Benchmarking
Fahim Dalvi | Maram Hasanain | Sabri Boughorbel | Basel Mousi | Samir Abdaljalil | Nizi Nazar | Ahmed Abdelali | Shammur Absar Chowdhury | Hamdy Mubarak | Ahmed Ali | Majd Hawasly | Nadir Durrani | Firoj Alam
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
The recent development and success of Large Language Models (LLMs) necessitate an evaluation of their performance across diverse NLP tasks in different languages. Although several frameworks have been developed and made publicly available, customizing them for specific tasks and datasets is often complex for many users. In this study, we introduce the LLMeBench framework, which can be seamlessly customized to evaluate LLMs for any NLP task, regardless of language. The framework features generic dataset loaders, several model providers, and pre-implements most standard evaluation metrics. It supports in-context learning with zero- and few-shot settings. A specific dataset and task can be evaluated for a given LLM in less than 20 lines of code, while allowing full flexibility to extend the framework for custom datasets, models, or tasks. The framework has been tested on 31 unique NLP tasks using 53 publicly available datasets within 90 experimental setups, involving approximately 296K data points. We open-sourced LLMeBench for the community (https://github.com/qcri/LLMeBench/), and a video demonstrating the framework is available online (https://youtu.be/9cC2m_abk3A).
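To illustrate the "less than 20 lines of code" claim, the sketch below follows the framework's asset pattern of a small Python module exposing config(), prompt(), and post_process() hooks. The specific class names (ArSASDataset, SentimentTask, OpenAIModel) and the response structure are assumptions made for illustration; consult the repository for the exact loaders, tasks, and model providers.

```python
# Hypothetical LLMeBench-style asset for Arabic sentiment classification.
# Class names and the response structure are assumptions for illustration;
# see https://github.com/qcri/LLMeBench/ for the actual API.
from llmebench.datasets import ArSASDataset   # assumed dataset loader
from llmebench.models import OpenAIModel      # assumed model provider
from llmebench.tasks import SentimentTask     # assumed task definition

def config():
    # Wire together the dataset, task, and model provider.
    return {
        "dataset": ArSASDataset,
        "task": SentimentTask,
        "model": OpenAIModel,
        "general_args": {"test_split": "test"},
    }

def prompt(input_sample):
    # Zero-shot prompt built from a single test sample.
    return [{
        "role": "user",
        "content": (
            "Classify the sentiment of the following Arabic sentence as "
            f"Positive, Negative, or Neutral.\n\n{input_sample}"
        ),
    }]

def post_process(response):
    # Map the raw model output back to a label string (response layout assumed).
    return response["choices"][0]["message"]["content"].strip()
```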