%0 Conference Proceedings
%T DuluthNLP at SemEval-2021 Task 7: Fine-Tuning RoBERTa Model for Humor Detection and Offense Rating
%A Akrah, Samuel
%Y Palmer, Alexis
%Y Schneider, Nathan
%Y Schluter, Natalie
%Y Emerson, Guy
%Y Herbelot, Aurelie
%Y Zhu, Xiaodan
%S Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F akrah-2021-duluthnlp
%X This paper presents the DuluthNLP submission to Task 7 of the SemEval 2021 competition on Detecting and Rating Humor and Offense. In it, we explain the approach used to train the model, together with the process of fine-tuning the model to obtain our results. We focus on humor detection, humor rating, and offense rating, representing three of the four subtasks provided. We show that optimizing hyper-parameters for learning rate, batch size, and number of epochs can increase the accuracy and F1 score for humor detection.
%R 10.18653/v1/2021.semeval-1.169
%U https://aclanthology.org/2021.semeval-1.169
%U https://doi.org/10.18653/v1/2021.semeval-1.169
%P 1196-1203