2024
An Extensible Massively Multilingual Lexical Simplification Pipeline Dataset using the MultiLS Framework
Matthew Shardlow | Fernando Alva-Manchego | Riza Batista-Navarro | Stefan Bott | Saul Calderon Ramirez | Rémi Cardon | Thomas François | Akio Hayakawa | Andrea Horbach | Anna Hülsing | Yusuke Ide | Joseph Marvin Imperial | Adam Nohejl | Kai North | Laura Occhipinti | Nelson Peréz Rojas | Nishat Raihan | Tharindu Ranasinghe | Martin Solis Salazar | Marcos Zampieri | Horacio Saggion
Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024
We present preliminary findings on the MultiLS dataset, developed in support of the 2024 Multilingual Lexical Simplification Pipeline (MLSP) Shared Task. This dataset currently comprises 300 instances of lexical complexity prediction and lexical simplification across 10 languages. In this paper, we (1) describe the annotation protocol to support the contribution of future datasets and (2) present summary statistics on the data gathered so far. Multilingual lexical simplification can help low-ability readers engage with otherwise difficult texts in their native, often low-resourced, languages.
The BEA 2024 Shared Task on the Multilingual Lexical Simplification Pipeline
Matthew Shardlow | Fernando Alva-Manchego | Riza Batista-Navarro | Stefan Bott | Saul Calderon Ramirez | Rémi Cardon | Thomas François | Akio Hayakawa | Andrea Horbach | Anna Hülsing | Yusuke Ide | Joseph Marvin Imperial | Adam Nohejl | Kai North | Laura Occhipinti | Nelson Peréz Rojas | Nishat Raihan | Tharindu Ranasinghe | Martin Solis Salazar | Sanja Štajner | Marcos Zampieri | Horacio Saggion
Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)
We report the findings of the 2024 Multilingual Lexical Simplification Pipeline shared task. We released a new dataset comprising 5,927 instances of lexical complexity prediction and lexical simplification on common contexts across 10 languages, split into trial (300) and test (5,627) sets. Ten teams participated across two tracks and 10 languages, with 233 runs evaluated across all systems. Five teams participated in all languages for the lexical complexity prediction task and four teams participated in all languages for the lexical simplification task. Teams employed a range of strategies, making use of open and closed source large language models for lexical simplification, as well as feature-based approaches for lexical complexity prediction. The highest-scoring team on the combined multilingual data obtained a Pearson's correlation of 0.6241 and an ACC@1@Top1 of 0.3772, both demonstrating that there is still room for improvement on two difficult sub-tasks of the lexical simplification pipeline.
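As a rough illustration of the two headline metrics above (this is not the official MLSP evaluation script; function names and data shapes are assumptions), Pearson's correlation compares gold and predicted complexity scores, while ACC@1@Top1 checks whether the top-ranked substitute matches the most frequently annotated gold substitution:

from collections import Counter

def pearson(gold, pred):
    # Pearson's correlation between gold and predicted complexity scores.
    n = len(gold)
    mg, mp = sum(gold) / n, sum(pred) / n
    cov = sum((g - mg) * (p - mp) for g, p in zip(gold, pred))
    sg = sum((g - mg) ** 2 for g in gold) ** 0.5
    sp = sum((p - mp) ** 2 for p in pred) ** 0.5
    return cov / (sg * sp)

def acc_at_1_at_top1(ranked_candidates, gold_annotations):
    # Share of instances whose top-ranked substitute equals the most
    # frequently suggested gold substitution for that instance.
    hits = 0
    for candidates, gold in zip(ranked_candidates, gold_annotations):
        top_gold = Counter(gold).most_common(1)[0][0]
        if candidates and candidates[0] == top_gold:
            hits += 1
    return hits / len(ranked_candidates)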
EmoMix-3L: A Code-Mixed Dataset for Bangla-English-Hindi for Emotion Detection
Nishat Raihan | Dhiman Goswami | Antara Mahmud | Antonios Anastasopoulos | Marcos Zampieri
Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation
Code-mixing is a well-studied linguistic phenomenon that occurs when two or more languages are mixed in text or speech. Several studies have been conducted on building datasets and performing downstream NLP tasks on code-mixed data. Although it is not uncommon to observe code-mixing of three or more languages, most available datasets in this domain contain code-mixed data from only two languages. In this paper, we introduce EmoMix-3L, a novel multi-label emotion detection dataset containing code-mixed data from three different languages. We experiment with several models on EmoMix-3L and we report that MuRIL outperforms other models on this dataset.
MentalHelp: A Multi-Task Dataset for Mental Health in Social Media
Nishat Raihan | Sadiya Sayara Chowdhury Puspo | Shafkat Farabi | Ana-Maria Bucur | Tharindu Ranasinghe | Marcos Zampieri
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Early detection of mental health disorders is an essential step in treating and preventing mental health conditions. Computational approaches have been applied to users’ social media profiles in an attempt to identify various mental health conditions such as depression, PTSD, schizophrenia, and eating disorders. The interest in this topic has motivated the creation of various depression detection datasets. However, annotating such datasets is expensive and time-consuming, limiting their size and scope. To overcome this limitation, we present MentalHelp, a large-scale semi-supervised mental disorder detection dataset containing 14 million instances. The corpus was collected from Reddit and labeled in a semi-supervised way using an ensemble of three separate models - flan-T5, Disor-BERT, and Mental-BERT.
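A minimal sketch of the kind of majority-vote pseudo-labelling described above, assuming simple classifier wrappers for the three models (the authors' actual pipeline may differ):

from collections import Counter

def ensemble_label(post, classifiers):
    # Each classifier maps a Reddit post to a candidate disorder label;
    # the post keeps a label only when a majority of the models agree.
    votes = Counter(clf(post) for clf in classifiers)
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else None

# classifiers = [flan_t5_predict, disorbert_predict, mentalbert_predict]  # assumed wrappers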
MasonTigers at SemEval-2024 Task 9: Solving Puzzles with an Ensemble of Chain-of-Thought Prompts
Nishat Raihan | Dhiman Goswami | Al Nahian Bin Emran | Sadiya Sayara Chowdhury Puspo | Amrita Ganguly | Marcos Zampieri
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
Our paper presents team MasonTigers' submission to SemEval-2024 Task 9, which provides a dataset of puzzles for testing natural language understanding. We employ large language models (LLMs) to solve this task through several prompting techniques. Zero-shot and few-shot prompting generate reasonably good results when tested with proprietary LLMs, compared to open-source models. We obtain further improved results with chain-of-thought prompting, an iterative prompting method that breaks down the reasoning process step by step. We obtain our best results by utilizing an ensemble of chain-of-thought prompts, placing 2nd in the word puzzle subtask and 13th in the sentence puzzle subtask. The strong performance of prompted LLMs demonstrates their capability for complex reasoning when provided with a decomposition of the thought process. Our work sheds light on how step-wise explanatory prompts can unlock more of the knowledge encoded in the parameters of large models.
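A minimal sketch of ensembling chain-of-thought prompts by majority vote, assuming a generic ask_llm interface and illustrative prompt templates rather than the team's exact prompts:

from collections import Counter

COT_TEMPLATES = [
    "Let's think step by step.\n{puzzle}\nFinal answer:",
    "Break the puzzle into smaller parts, reason through each, then answer.\n{puzzle}\nFinal answer:",
    "Explain your reasoning carefully before giving the answer.\n{puzzle}\nFinal answer:",
]

def solve_with_cot_ensemble(puzzle, ask_llm):
    # Query the model once per prompt variant and keep the most common answer.
    answers = [ask_llm(t.format(puzzle=puzzle)).strip() for t in COT_TEMPLATES]
    return Counter(answers).most_common(1)[0][0]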
MasonTigers at SemEval-2024 Task 8: Performance Analysis of Transformer-based Models on Machine-Generated Text Detection
Sadiya Sayara Chowdhury Puspo | Nishat Raihan | Dhiman Goswami | Al Nahian Bin Emran | Amrita Ganguly | Özlem Uzuner
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
This paper presents the MasonTigers entry to SemEval-2024 Task 8 - Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection. The task encompasses Binary Human-Written vs. Machine-Generated Text Classification (Track A), Multi-Way Machine-Generated Text Classification (Track B), and Human-Machine Mixed Text Detection (Track C). Our best-performing approaches mainly utilize an ensemble of discriminator transformer models, along with sentence transformers and statistical machine learning approaches in specific cases. Moreover, zero-shot prompting and fine-tuning of FLAN-T5 are used for Tracks A and B.
MasonTigers at SemEval-2024 Task 1: An Ensemble Approach for Semantic Textual Relatedness
Dhiman Goswami | Sadiya Sayara Chowdhury Puspo | Nishat Raihan | Al Nahian Bin Emran | Amrita Ganguly | Marcos Zampieri
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
This paper presents the MasonTigers' entry to SemEval-2024 Task 1 - Semantic Textual Relatedness. The task encompasses supervised (Track A), unsupervised (Track B), and cross-lingual (Track C) approaches to semantic textual relatedness across 14 languages. MasonTigers stands out as one of the two teams that participated in all languages across the three tracks. Our approaches achieved rankings ranging from 11th to 21st in Track A, from 1st to 8th in Track B, and from 5th to 12th in Track C. Adhering to the task-specific constraints, our best-performing approaches utilize an ensemble of statistical machine learning methods combined with language-specific BERT-based models and sentence transformers.