2024
iML at SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials with LLM Based Ensemble Inferencing
Abbas Akkasi | Adnan Khan | Mai A. Shaaban | Majid Komeili | Mohammad Yaqub
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
We participated in Shared Task 2 at SemEval-2024, employing a diverse set of solutions with a particular emphasis on a Large Language Model (LLM) based zero-shot inference approach to address the challenge.
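As a rough illustration of the zero-shot idea, a minimal sketch might look like the following; this is not the authors' system: `query_llm`, the prompt wording, the label set, and the majority-vote scheme are all assumptions.

```python
from collections import Counter

def build_prompt(premise: str, statement: str) -> str:
    # Hypothetical prompt template for an NLI4CT-style task:
    # decide whether a statement follows from clinical trial evidence.
    return (
        "You are given evidence from a clinical trial report and a statement.\n"
        f"Evidence: {premise}\n"
        f"Statement: {statement}\n"
        "Answer with exactly one word: Entailment or Contradiction."
    )

def classify(premise: str, statement: str, query_llm) -> str:
    # Zero-shot inference: no task-specific fine-tuning, just a prompt.
    answer = query_llm(build_prompt(premise, statement)).strip().lower()
    return "Entailment" if answer.startswith("entail") else "Contradiction"

def ensemble_classify(premise: str, statement: str, llms) -> str:
    # Majority vote over several LLMs, echoing the "ensemble inferencing"
    # in the title; the voting scheme itself is an assumption.
    votes = [classify(premise, statement, llm) for llm in llms]
    return Counter(votes).most_common(1)[0][0]
```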
2023
Reference-Free Summarization Evaluation with Large Language Models
Abbas Akkasi | Kathleen Fraser | Majid Komeili
Proceedings of the 4th Workshop on Evaluation and Comparison of NLP Systems
With the continuous advancement of unsupervised learning methods, text generation has become increasingly pervasive, yet evaluating the quality of the generated text remains challenging. Human annotations are expensive and often show high levels of disagreement, in particular for tasks characterized by inherent subjectivity, such as translation and summarization. Consequently, the demand for automated metrics that can reliably assess the quality of such generative systems and their outputs has grown more pronounced than ever. In 2023, Eval4NLP organized a shared task dedicated to the automatic evaluation of outputs from two categories of generative systems, machine translation and summarization, by prompting Large Language Models. Participating in the summarization evaluation track, we propose an approach that prompts LLMs to evaluate six different latent dimensions of summarization quality. In contrast to many previous approaches to summarization assessment, which emphasize lexical overlap with a reference text, this method surfaces the importance of correct syntax in summarization evaluation. Our method achieved the second-highest performance in this shared task, demonstrating its effectiveness as a reference-free evaluation method.
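A minimal sketch of what prompting for multiple latent quality dimensions could look like; the six dimension names, the 1-5 scale, and the mean aggregation are assumptions, not the paper's exact rubric.

```python
DIMENSIONS = ["coherence", "consistency", "fluency",
              "relevance", "syntax", "coverage"]  # assumed dimension names

def dimension_prompt(source: str, summary: str, dim: str) -> str:
    return (
        f"Rate the {dim} of the summary on a scale from 1 (worst) to 5 (best).\n"
        f"Source text: {source}\n"
        f"Summary: {summary}\n"
        "Score:"
    )

def score_summary(source: str, summary: str, query_llm) -> float:
    # Reference-free: only the source document and the candidate summary
    # are used; no gold reference summary is needed.
    scores = []
    for dim in DIMENSIONS:
        reply = query_llm(dimension_prompt(source, summary, dim))
        digits = [c for c in reply if c.isdigit()]
        scores.append(int(digits[0]) if digits else 3)  # midpoint fallback
    return sum(scores) / len(scores)
```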
2022
Multi Perspective Scientific Document Summarization With Graph Attention Networks (GATS)
Abbas Akkasi
Proceedings of the Third Workshop on Scholarly Document Processing
It is well recognized that summarizing scientific texts is difficult. For any given document, most summarization research assumes there is a single best gold summary. Having just one gold summary limits our capacity to assess the effectiveness of summarization algorithms, because creating summaries is an art. Likewise, because it takes subject-matter experts a great deal of time to read and comprehend lengthy scientific publications, annotating several gold summaries per scientific document can be very expensive. The Multi-Perspective Scientific Document Summarization (MuP) shared task explores methods for producing multi-perspective scientific summaries. Utilizing Graph Attention Networks (GATs), we take an extractive approach, casting summarization as a sentence-ranking task. Although the results produced by the proposed model are not particularly impressive, comparing them with the state of the art demonstrates the model's potential for improvement.
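To make the sentence-ranking formulation concrete, here is a minimal sketch of a GAT-based ranker, assuming PyTorch Geometric; the feature dimensions, graph construction, and scoring head are illustrative choices, not the paper's architecture.

```python
import torch
from torch import nn
from torch_geometric.nn import GATConv  # assumes PyTorch Geometric is installed

class SentenceRanker(nn.Module):
    """Scores each sentence node in a document graph; the top-k form the summary."""

    def __init__(self, in_dim: int = 768, hidden: int = 128, heads: int = 4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=heads)   # multi-head attention
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        self.score = nn.Linear(hidden, 1)                  # one score per sentence

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [num_sentences, in_dim] sentence embeddings
        # edge_index: [2, num_edges], e.g. sentence-similarity edges (assumed)
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return self.score(h).squeeze(-1)  # rank sentences, extract the top-k
```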
2018
TakeLab at SemEval-2018 Task 7: Combining Sparse and Dense Features for Relation Classification in Scientific Texts
Martin Gluhak | Maria Pia di Buono | Abbas Akkasi | Jan Šnajder
Proceedings of the 12th International Workshop on Semantic Evaluation
We describe the two systems with which we participated in SemEval-2018 Task 7, subtask 1 on semantic relation classification: an SVM model and a CNN model. Both models combine dense pretrained word2vec features with handcrafted sparse features. To train the models, we combine the two datasets provided for the subtasks in order to balance the under-represented classes. The SVM model performed better than the CNN, achieving an F1-macro score of 69.98% on subtask 1.1 and 75.69% on subtask 1.2. The system ranked 7th among 28 submissions on subtask 1.1 and 7th among 20 submissions on subtask 1.2.
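A minimal sketch of the dense-plus-sparse feature fusion on the SVM side, using scikit-learn and scipy; the concrete feature extractors and dimensions are assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.svm import LinearSVC

def combine_features(dense: np.ndarray, sparse: csr_matrix) -> csr_matrix:
    # Stack averaged word2vec vectors next to handcrafted sparse features
    # so a single linear SVM sees both views at once.
    return hstack([csr_matrix(dense), sparse]).tocsr()

# X_dense:  [n_samples, 300] averaged word2vec embeddings      (assumed setup)
# X_sparse: [n_samples, n_feats] handcrafted indicator features (assumed setup)
# clf = LinearSVC().fit(combine_features(X_dense, X_sparse), y_train)
```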