SMASH at StanceEval 2024: Prompt Engineering LLMs for Arabic Stance Detection
Abstract
This paper presents our submission for the Stance Detection in Arabic Language (StanceEval) 2024 shared task conducted by Team SMASH of the University of Edinburgh. We evaluated the performance of various BERT-based and large language models (LLMs). MARBERT demonstrates superior performance among the BERT-based models, achieving F1 and macro-F1 scores of 0.570 and 0.770, respectively. In contrast, Command R model outperforms all models with the highest overall F1 score of 0.661 and macro F1 score of 0.820.
- Anthology ID:
- 2024.arabicnlp-1.92
- Volume:
- Proceedings of the Second Arabic Natural Language Processing Conference
- Month:
- August
- Year:
- 2024
- Address:
- Bangkok, Thailand
- Editors:
- Nizar Habash, Houda Bouamor, Ramy Eskander, Nadi Tomeh, Ibrahim Abu Farha, Ahmed Abdelali, Samia Touileb, Injy Hamed, Yaser Onaizan, Bashar Alhafni, Wissam Antoun, Salam Khalifa, Hatem Haddad, Imed Zitouni, Badr AlKhamissi, Rawan Almatham, Khalil Mrini
- Venues:
- ArabicNLP | WS
- SIG:
- SIGARAB
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 800–806
- Language:
- URL:
- https://aclanthology.org/2024.arabicnlp-1.92/
- DOI:
- 10.18653/v1/2024.arabicnlp-1.92
- Bibkey:
- hariri-abu-farha-2024-smash-stanceeval
- Cite (ACL):
- Youssef Al Hariri and Ibrahim Abu Farha. 2024. SMASH at StanceEval 2024: Prompt Engineering LLMs for Arabic Stance Detection. In Proceedings of the Second Arabic Natural Language Processing Conference, pages 800–806, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal):
- SMASH at StanceEval 2024: Prompt Engineering LLMs for Arabic Stance Detection (Al Hariri & Abu Farha, ArabicNLP 2024)
- PDF:
- https://aclanthology.org/2024.arabicnlp-1.92.pdf
Export citation
@inproceedings{hariri-abu-farha-2024-smash-stanceeval,
title = "{SMASH} at {S}tance{E}val 2024: Prompt Engineering {LLM}s for {A}rabic Stance Detection",
author = "Al Hariri, Youssef and
Abu Farha, Ibrahim",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of the Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.92/",
doi = "10.18653/v1/2024.arabicnlp-1.92",
pages = "800--806",
abstract = "This paper presents our submission for the Stance Detection in Arabic Language (StanceEval) 2024 shared task conducted by Team SMASH of the University of Edinburgh. We evaluated the performance of various BERT-based and large language models (LLMs). MARBERT demonstrates superior performance among the BERT-based models, achieving F1 and macro-F1 scores of 0.570 and 0.770, respectively. In contrast, Command R model outperforms all models with the highest overall F1 score of 0.661 and macro F1 score of 0.820."
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="hariri-abu-farha-2024-smash-stanceeval">
<titleInfo>
<title>SMASH at StanceEval 2024: Prompt Engineering LLMs for Arabic Stance Detection</title>
</titleInfo>
<name type="personal">
<namePart type="given">Youssef</namePart>
<namePart type="family">Al Hariri</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Ibrahim</namePart>
<namePart type="family">Abu Farha</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<originInfo>
<dateIssued>2024-08</dateIssued>
</originInfo>
<typeOfResource>text</typeOfResource>
<relatedItem type="host">
<titleInfo>
<title>Proceedings of the Second Arabic Natural Language Processing Conference</title>
</titleInfo>
<name type="personal">
<namePart type="given">Nizar</namePart>
<namePart type="family">Habash</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Houda</namePart>
<namePart type="family">Bouamor</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Ramy</namePart>
<namePart type="family">Eskander</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Nadi</namePart>
<namePart type="family">Tomeh</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Ibrahim</namePart>
<namePart type="family">Abu Farha</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Ahmed</namePart>
<namePart type="family">Abdelali</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Samia</namePart>
<namePart type="family">Touileb</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Injy</namePart>
<namePart type="family">Hamed</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Yaser</namePart>
<namePart type="family">Onaizan</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Bashar</namePart>
<namePart type="family">Alhafni</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Wissam</namePart>
<namePart type="family">Antoun</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Salam</namePart>
<namePart type="family">Khalifa</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Hatem</namePart>
<namePart type="family">Haddad</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Imed</namePart>
<namePart type="family">Zitouni</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Badr</namePart>
<namePart type="family">AlKhamissi</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Rawan</namePart>
<namePart type="family">Almatham</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Khalil</namePart>
<namePart type="family">Mrini</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<originInfo>
<publisher>Association for Computational Linguistics</publisher>
<place>
<placeTerm type="text">Bangkok, Thailand</placeTerm>
</place>
</originInfo>
<genre authority="marcgt">conference publication</genre>
</relatedItem>
<abstract>This paper presents our submission for the Stance Detection in Arabic Language (StanceEval) 2024 shared task conducted by Team SMASH of the University of Edinburgh. We evaluated the performance of various BERT-based and large language models (LLMs). MARBERT demonstrates superior performance among the BERT-based models, achieving F1 and macro-F1 scores of 0.570 and 0.770, respectively. In contrast, Command R model outperforms all models with the highest overall F1 score of 0.661 and macro F1 score of 0.820.</abstract>
<identifier type="citekey">hariri-abu-farha-2024-smash-stanceeval</identifier>
<identifier type="doi">10.18653/v1/2024.arabicnlp-1.92</identifier>
<location>
<url>https://aclanthology.org/2024.arabicnlp-1.92/</url>
</location>
<part>
<date>2024-08</date>
<extent unit="page">
<start>800</start>
<end>806</end>
</extent>
</part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T SMASH at StanceEval 2024: Prompt Engineering LLMs for Arabic Stance Detection
%A Al Hariri, Youssef
%A Abu Farha, Ibrahim
%Y Habash, Nizar
%Y Bouamor, Houda
%Y Eskander, Ramy
%Y Tomeh, Nadi
%Y Abu Farha, Ibrahim
%Y Abdelali, Ahmed
%Y Touileb, Samia
%Y Hamed, Injy
%Y Onaizan, Yaser
%Y Alhafni, Bashar
%Y Antoun, Wissam
%Y Khalifa, Salam
%Y Haddad, Hatem
%Y Zitouni, Imed
%Y AlKhamissi, Badr
%Y Almatham, Rawan
%Y Mrini, Khalil
%S Proceedings of the Second Arabic Natural Language Processing Conference
%D 2024
%8 August
%I Association for Computational Linguistics
%C Bangkok, Thailand
%F hariri-abu-farha-2024-smash-stanceeval
%X This paper presents our submission for the Stance Detection in Arabic Language (StanceEval) 2024 shared task conducted by Team SMASH of the University of Edinburgh. We evaluated the performance of various BERT-based and large language models (LLMs). MARBERT demonstrates superior performance among the BERT-based models, achieving F1 and macro-F1 scores of 0.570 and 0.770, respectively. In contrast, Command R model outperforms all models with the highest overall F1 score of 0.661 and macro F1 score of 0.820.
%R 10.18653/v1/2024.arabicnlp-1.92
%U https://aclanthology.org/2024.arabicnlp-1.92/
%U https://doi.org/10.18653/v1/2024.arabicnlp-1.92
%P 800-806
Markdown (Informal)
[SMASH at StanceEval 2024: Prompt Engineering LLMs for Arabic Stance Detection](https://aclanthology.org/2024.arabicnlp-1.92/) (Al Hariri & Abu Farha, ArabicNLP 2024)