2024
Beyond Probabilities: Unveiling the Misalignment in Evaluating Large Language Models
Chenyang Lyu | Minghao Wu | Alham Aji
Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)
Large Language Models (LLMs) have demonstrated remarkable capabilities across various applications, fundamentally reshaping the landscape of natural language processing (NLP) research. However, recent evaluation frameworks often rely on the output probabilities of LLMs for predictions, primarily due to computational constraints, diverging from real-world LLM usage scenarios. While widely employed, the efficacy of these probability-based evaluation strategies remains an open research question. This study scrutinizes the validity of such probability-based evaluation methods in the context of using LLMs for Multiple Choice Questions (MCQs), highlighting their inherent limitations. Our empirical investigation reveals that the prevalent probability-based evaluation method aligns poorly with generation-based prediction. The outcomes of our study can enhance the understanding of LLM evaluation methodologies and provide insights for future research in this domain.
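To make the contrast concrete, here is a minimal sketch of the two evaluation styles the abstract compares: scoring each MCQ option by its log-likelihood under the model (probability-based) versus letting the model free-generate an answer and parsing it (generation-based). The model name, prompt format, and answer-parsing heuristic are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged sketch: probability-based vs. generation-based MCQ prediction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

question = "Which planet is known as the Red Planet?"
options = ["Venus", "Mars", "Jupiter", "Saturn"]
prompt = f"Question: {question}\nAnswer:"

def option_logprob(prompt: str, option: str) -> float:
    """Total log-likelihood of the option tokens given the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    # Only the option tokens (those after the prompt) contribute.
    positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(log_probs[i, full_ids[0, i + 1]].item() for i in positions)

# Probability-based: pick the option with the highest log-likelihood.
prob_pred = max(options, key=lambda o: option_logprob(prompt, o))

# Generation-based: free-generate, then look for an option in the output,
# mirroring how LLMs are actually used in practice.
gen_ids = model.generate(
    tokenizer(prompt, return_tensors="pt").input_ids,
    max_new_tokens=8,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
generation = tokenizer.decode(gen_ids[0], skip_special_tokens=True)
gen_pred = next((o for o in options if o in generation[len(prompt):]), None)

print(f"probability-based: {prob_pred}, generation-based: {gen_pred}")
```

The misalignment the paper studies shows up exactly when these two predictions disagree for the same model and question.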
SemRel2024: A Collection of Semantic Textual Relatedness Datasets for 13 Languages
Nedjma Ousidhoum | Shamsuddeen Muhammad | Mohamed Abdalla | Idris Abdulmumin | Ibrahim Ahmad | Sanchit Ahuja | Alham Aji | Vladimir Araujo | Abinew Ayele | Pavan Baswani | Meriem Beloucif | Chris Biemann | Sofia Bourhim | Christine Kock | Genet Dekebo | Oumaima Hourrane | Gopichand Kanumolu | Lokesh Madasu | Samuel Rutunda | Manish Shrivastava | Thamar Solorio | Nirmal Surange | Hailegnaw Tilaye | Krishnapriya Vishnubhotla | Genta Winata | Seid Yimam | Saif Mohammad
Findings of the Association for Computational Linguistics: ACL 2024
Exploring and quantifying semantic relatedness is central to representing language and holds significant implications across various NLP tasks. While earlier NLP research primarily focused on semantic similarity, often within the English language context, we instead investigate the broader phenomenon of semantic relatedness. In this paper, we present SemRel, a new semantic relatedness dataset collection annotated by native speakers across 13 languages: Afrikaans, Algerian Arabic, Amharic, English, Hausa, Hindi, Indonesian, Kinyarwanda, Marathi, Moroccan Arabic, Modern Standard Arabic, Spanish, and Telugu. These languages originate from five distinct language families and are predominantly spoken in Africa and Asia – regions characterised by a relatively limited availability of NLP resources. Each instance in the SemRel datasets is a sentence pair associated with a score that represents the degree of semantic textual relatedness between the two sentences. The scores are obtained using a comparative annotation framework. We describe the data collection and annotation processes, challenges when building the datasets, baseline experiments, and their impact and utility in NLP.
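The abstract says the relatedness scores come from a comparative annotation framework. As a hedged illustration only (the exact SemRel mechanics may differ), the sketch below computes scores in the style of Best-Worst Scaling, a common comparative scheme: each sentence pair's score is the fraction of times annotators judged it most related minus the fraction of times they judged it least related, rescaled to [0, 1]. The pair IDs and annotations are hypothetical.

```python
# Hedged sketch of comparative-annotation scoring (Best-Worst Scaling style).
from collections import Counter

# Each annotation: the 4 sentence-pair IDs shown together, plus which was
# judged most related ("best") and which least related ("worst").
annotations = [
    ({"p1", "p2", "p3", "p4"}, "p1", "p4"),
    ({"p1", "p2", "p3", "p4"}, "p2", "p4"),
    ({"p1", "p2", "p5", "p6"}, "p1", "p6"),
]

best, worst, shown = Counter(), Counter(), Counter()
for tuple_ids, b, w in annotations:
    best[b] += 1
    worst[w] += 1
    for pid in tuple_ids:
        shown[pid] += 1

# Raw score in [-1, 1], rescaled to [0, 1] for a relatedness score.
scores = {
    pid: 0.5 * (1 + (best[pid] - worst[pid]) / shown[pid])
    for pid in shown
}
print(scores)  # e.g. p4 -> 0.0 (always judged least related)
```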
COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances
Haryo Wibowo | Erland Fuadi | Made Nityasya | Radityo Eko Prasojo | Alham Aji
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
We present COPAL-ID, a novel, public Indonesian-language common sense reasoning dataset. Unlike the previous Indonesian COPA dataset (XCOPA-ID), COPAL-ID incorporates Indonesian local and cultural nuances and therefore provides a more natural portrayal of day-to-day causal reasoning within the Indonesian cultural sphere. Professionally written by native speakers from scratch, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID. In addition, we present COPAL-ID in both standard Indonesian and Jakartan Indonesian, a dialect commonly used in daily conversation. COPAL-ID poses a greater challenge for existing open-sourced and closed state-of-the-art multilingual language models, yet is trivially easy for humans. Our findings suggest that general multilingual models struggle to perform well, achieving 66.91% accuracy on COPAL-ID. South-East Asian-specific models achieve slightly better performance, at 73.88% accuracy. Yet this number still falls short of near-perfect human performance, showing that these language models still lag far behind in comprehending the local nuances of Indonesian.
Daisy at WASSA 2024 Empathy and Personality Shared Task: A Quick Exploration on Emotional Pattern of Empathy and Distress
Rendi Chevi | Alham Aji
Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
When we encounter upsetting or tragic situations involving other people, we might feel certain emotions that are congruent, though not necessarily identical, to what that person might have gone through. These kinds of vicarious emotions are what define empathy and distress; they can be seen as a form of emotional response to other people in need. In this paper, we describe our participation in WASSA 2024 Shared Task 3, predicting a writer’s level of empathy and distress from their personal essays. We approach this task by assuming that one’s levels of empathy and distress can be revealed by the emotional patterns within one’s essay. After extracting the emotional patterns from each essay with an emotion classifier, we regress the empathy and distress levels from these patterns. Through correlation and model-explainability analysis, we find that there are similar sets of emotions, such as sadness or disappointment, and distinct sets of emotions, such as anger or approval, that may describe the writer’s levels of empathy and distress. We hope that our approach and findings can serve as a basis for future work on modelling and explaining empathy and distress from emotional patterns.
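The described pipeline lends itself to a compact sketch: an emotion classifier turns each essay into a vector of emotion scores, and a linear regressor maps that vector to an empathy (or distress) level. The classifier, essays, and gold scores below are hypothetical stand-ins, not the authors' actual setup.

```python
# Hedged sketch: emotion-pattern extraction followed by empathy regression.
import numpy as np
from sklearn.linear_model import Ridge
from transformers import pipeline

# An illustrative off-the-shelf emotion classifier (GoEmotions labels);
# not necessarily the one used in the shared-task system.
classifier = pipeline(
    "text-classification",
    model="SamLowe/roberta-base-go_emotions",
    top_k=None,  # return scores for all emotion labels
)

def emotion_vector(essay: str) -> np.ndarray:
    """Fixed-order vector of per-emotion scores for one essay."""
    scores = classifier([essay])[0]
    return np.array([s["score"] for s in sorted(scores, key=lambda s: s["label"])])

# Hypothetical training essays with gold empathy levels (1-7 scale).
essays = [
    "Reading about the flood victims left me heartbroken for days.",
    "The article was fine, nothing really stood out to me.",
]
empathy_levels = [6.2, 2.1]

X = np.stack([emotion_vector(e) for e in essays])
reg = Ridge().fit(X, empathy_levels)

new_essay = "I kept thinking about how scared those families must have been."
print(reg.predict(emotion_vector(new_essay)[None, :]))
```

Because the regressor is linear, its coefficients directly support the kind of explainability analysis the abstract mentions: large positive weights on sadness-like emotions, for instance, would indicate they drive the predicted empathy level.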
M4GT-Bench: Evaluation Benchmark for Black-Box Machine-Generated Text Detection
Yuxia Wang | Jonibek Mansurov | Petar Ivanov | Jinyan Su | Artem Shelmanov | Akim Tsvigun | Osama Mohammed Afzal | Tarek Mahmoud | Giovanni Puccetti | Thomas Arnold | Alham Aji | Nizar Habash | Iryna Gurevych | Preslav Nakov
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The advent of Large Language Models (LLMs) has brought an unprecedented surge in machine-generated text (MGT) across diverse channels. This raises legitimate concerns about its potential misuse and societal implications. The need to identify and differentiate such content from genuine human-generated text is critical in combating disinformation, preserving the integrity of education and scientific fields, and maintaining trust in communication. In this work, we address this problem by introducing a new benchmark built on a multilingual, multi-domain, and multi-generator corpus of MGTs: M4GT-Bench. The benchmark comprises three tasks: (1) monolingual and multilingual binary MGT detection; (2) multi-way detection, where one needs to identify which particular model generated the text; and (3) mixed human-machine text detection, where the word boundary delimiting MGT from human-written content must be determined. On the developed benchmark, we test several MGT detection baselines and also evaluate human performance. We find that obtaining good MGT detection performance usually requires access to training data from the same domain and generators. The benchmark is available at https://github.com/mbzuai-nlp/M4GT-Bench.
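To show what a detection baseline looks like in practice, here is a hedged sketch of a minimal binary detector in the style of task (1): a bag-of-ngrams classifier over labeled human and machine texts. The texts and labels are hypothetical stand-ins; the benchmark's actual baselines, data, and splits live in the repository linked above.

```python
# Hedged sketch: a minimal binary MGT-detection baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "honestly the ending of that film made no sense, loved it anyway",
    "As an AI language model, I can summarize the key findings as follows.",
]
train_labels = [0, 1]  # 0 = human-written, 1 = machine-generated

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
detector.fit(train_texts, train_labels)

# The abstract's key caveat applies here: such a detector tends to degrade
# on text from domains or generators unseen during training.
print(detector.predict(["I'd be happy to help you draft that email."]))
```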
Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages
Samuel Cahyawijaya | Holy Lovenia | Fajri Koto | Rifki Putri | Wawan Cenggoro | Jhonson Lee | Salsabil Akbar | Emmanuel Dave | Nuurshadieq Nuurshadieq | Muhammad Mahendra | Rr Putri | Bryan Wilie | Genta Winata | Alham Aji | Ayu Purwarianti | Pascale Fung
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) show remarkable human-like capability in various domains and languages; however, a notable quality gap remains for low-resource languages such as Indonesian. To bridge this quality gap, we introduce Cendol, a collection of Indonesian LLMs encompassing both decoder-only and encoder-decoder architectures across a range of model sizes. We highlight Cendol’s effectiveness across a diverse array of tasks, attaining a ~20% improvement, and demonstrate its capability to generalize to unseen tasks and to the indigenous languages of Indonesia. Furthermore, Cendol models showcase improved human favorability despite their limitations in capturing indigenous knowledge and cultural values in Indonesia. In addition, we discuss the shortcomings of parameter-efficient tuning methods, such as LoRA, for language adaptation, and instead propose vocabulary adaptation to enhance efficiency. Lastly, we evaluate the safety of Cendol and show that safety attained through pre-training in one language, such as English, transfers to low-resource languages, such as Indonesian, even without RLHF or safety fine-tuning.
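Vocabulary adaptation, as proposed above, can be sketched in a few lines: add frequent target-language tokens to the tokenizer and resize the model's embedding matrix, so that common Indonesian words are no longer split into many subwords. The base model and token list below are illustrative assumptions, not the Cendol recipe itself.

```python
# Hedged sketch: vocabulary adaptation for a target language.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical high-frequency Indonesian words mined from a target corpus.
new_tokens = ["yang", "dengan", "tidak", "adalah", "untuk"]
added = tokenizer.add_tokens(
    [t for t in new_tokens if t not in tokenizer.get_vocab()]
)

# Grow the embedding (and tied output) matrix to cover the added tokens;
# the new rows start randomly initialised and are learned during
# continued pre-training or fine-tuning on the target language.
model.resize_token_embeddings(len(tokenizer))
print(f"added {added} tokens; vocab size is now {len(tokenizer)}")
```

The efficiency gain comes from shorter token sequences for Indonesian text, which reduces both training and inference cost relative to adapters such as LoRA that leave the ill-fitting vocabulary in place.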