niksss at SemEval-2022 Task 6: Are Traditionally Pre-Trained Contextual Embeddings Enough for Detecting Intended Sarcasm?

Nikhil Singh


Abstract
This paper presents the 10th- and 11th-place systems for Subtask A-English and Subtask A-Arabic, respectively, of SemEval-2022 Task 6. The purpose of Subtask A was to classify a given text sequence as sarcastic or non-sarcastic. We also briefly cover our method for Subtask B, which performed below par compared with most submissions on the official leaderboard. All of the developed solutions used a transformer-based language model to encode the text sequences, with the pretrained weights and classifier adapted to the language and subtask at hand.
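The abstract describes the general recipe: a pretrained transformer encoder with a sequence-classification head, fine-tuned per language. Below is a minimal sketch of that setup using the HuggingFace transformers library; the checkpoint name, label order, and example inputs are illustrative assumptions, not the paper's exact configuration.

    # Minimal sketch of a transformer-based binary sarcasm classifier,
    # assuming a HuggingFace-style setup. The checkpoint is a hypothetical
    # choice; per the paper, it would be swapped per language (e.g. an
    # English model for Subtask A-English, an Arabic model for Subtask A-Arabic).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    checkpoint = "bert-base-uncased"  # illustrative, not the author's exact model
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=2  # sarcastic vs. non-sarcastic
    )

    texts = ["Oh great, another Monday.", "The meeting starts at 10 am."]
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    with torch.no_grad():
        logits = model(**batch).logits
    preds = logits.argmax(dim=-1)  # label order (0/1) assumed, not from the paper
    print(preds.tolist())

In practice the classification head would be fine-tuned on the shared-task training data before inference; the snippet only shows the encode-and-classify structure the abstract refers to.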
Anthology ID:
2022.semeval-1.127
Volume:
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Guy Emerson, Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, Shyam Ratan
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
907–911
URL:
https://aclanthology.org/2022.semeval-1.127
DOI:
10.18653/v1/2022.semeval-1.127
Bibkey:
Cite (ACL):
Nikhil Singh. 2022. niksss at SemEval-2022 Task 6: Are Traditionally Pre-Trained Contextual Embeddings Enough for Detecting Intended Sarcasm?. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 907–911, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
niksss at SemEval-2022 Task 6: Are Traditionally Pre-Trained Contextual Embeddings Enough for Detecting Intended Sarcasm? (Singh, SemEval 2022)
PDF:
https://aclanthology.org/2022.semeval-1.127.pdf
Data
TweetEval