DuluthNLP at SemEval-2021 Task 7: Fine-Tuning RoBERTa Model for Humor Detection and Offense Rating

Samuel Akrah


Abstract
This paper presents the DuluthNLP submission to Task 7 of the SemEval 2021 competition on Detecting and Rating Humor and Offense. We describe the approach used to train the model, together with the fine-tuning process that produced our results. We focus on humor detection, humor rating, and offense rating, three of the four subtasks provided. We show that optimizing hyperparameters for learning rate, batch size, and number of epochs can increase the accuracy and F1 score for humor detection.
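As a rough illustration of the setup the abstract describes, the sketch below fine-tunes roberta-base for binary humor detection and sweeps the three hyperparameters the paper tunes: learning rate, batch size, and number of epochs. It is a minimal sketch assuming the HuggingFace transformers and datasets libraries plus scikit-learn; the grid values, the toy dataset stand-in, and the evaluation-on-train shortcut are illustrative assumptions, not the settings or data reported in the paper.

# Minimal sketch: fine-tune RoBERTa for binary humor detection and
# grid-search learning rate, batch size, and epochs. All values below
# are illustrative assumptions, not the paper's reported settings.
import itertools

import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Toy stand-in for the HaHackathon data: (text, is_humor) pairs.
data = Dataset.from_dict({
    "text": ["Why did the chicken cross the road?",
             "Markets closed lower on Friday."],
    "label": [1, 0],
})
data = data.map(lambda b: tokenizer(b["text"], truncation=True,
                                    padding="max_length", max_length=128),
                batched=True)

def compute_metrics(pred):
    # Accuracy and F1, the two metrics the abstract highlights.
    preds = np.argmax(pred.predictions, axis=-1)
    return {"accuracy": accuracy_score(pred.label_ids, preds),
            "f1": f1_score(pred.label_ids, preds)}

best = None
# Hypothetical grid over the three tuned hyperparameters.
for lr, bs, epochs in itertools.product([1e-5, 2e-5], [16, 32], [2, 3]):
    # Reload fresh pretrained weights for each configuration.
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=2)
    args = TrainingArguments(output_dir=f"run_lr{lr}_bs{bs}_ep{epochs}",
                             learning_rate=lr,
                             per_device_train_batch_size=bs,
                             num_train_epochs=epochs)
    trainer = Trainer(model=model, args=args, train_dataset=data,
                      eval_dataset=data,  # toy only; use a dev split in practice
                      compute_metrics=compute_metrics)
    trainer.train()
    f1 = trainer.evaluate()["eval_f1"]
    if best is None or f1 > best[0]:
        best = (f1, lr, bs, epochs)

print("best F1 %.3f at lr=%g, batch=%d, epochs=%d" % best)

In practice each configuration would be scored on the official development split rather than the training data shown here.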
Anthology ID:
2021.semeval-1.169
Volume:
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Note:
Pages:
1196–1203
URL:
https://aclanthology.org/2021.semeval-1.169
DOI:
10.18653/v1/2021.semeval-1.169
Cite (ACL):
Samuel Akrah. 2021. DuluthNLP at SemEval-2021 Task 7: Fine-Tuning RoBERTa Model for Humor Detection and Offense Rating. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 1196–1203, Online. Association for Computational Linguistics.
Cite (Informal):
DuluthNLP at SemEval-2021 Task 7: Fine-Tuning RoBERTa Model for Humor Detection and Offense Rating (Akrah, SemEval 2021)
PDF:
https://aclanthology.org/2021.semeval-1.169.pdf
Code:
akrahdan/semeval2021