Smash at SemEval-2020 Task 7: Optimizing the Hyperparameters of ERNIE 2.0 for Humor Ranking and Rating

J. A. Meaney, Steven Wilson, Walid Magdy


Abstract
The use of pre-trained language models such as BERT and ULMFiT has become increasingly popular in shared tasks, due to their powerful language modelling capabilities. Our entry to SemEval-2020 Task 7 uses ERNIE 2.0, a language model which is pre-trained on a large number of tasks to enrich the semantic and syntactic information it learns. ERNIE's knowledge-masking pre-training task is a distinctive method for learning about named entities, and we hypothesise that it may be useful on this dataset, which is built from news headlines and contains many named entities. We optimize the hyperparameters of a regression model and a classification model, and find that the hyperparameters we selected yield larger gains for the classification model than for the regression model.
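To make the setup concrete, the sketch below (in Python, using PyTorch and the Hugging Face transformers library, which the paper does not necessarily use) shows how a pre-trained encoder can be given a regression head for the rating sub-task and a classification head for the ranking sub-task, with a small hyperparameter grid to search over. The checkpoint name, hyperparameter values, and search procedure are illustrative assumptions, not the authors' settings.

    from itertools import product
    import torch
    from transformers import AutoModelForSequenceClassification

    MODEL_NAME = "bert-base-uncased"  # placeholder; the paper fine-tunes ERNIE 2.0

    def build_model(task):
        # num_labels=1 gives a regression head (funniness rating);
        # num_labels=2 gives a binary classification head (which edit is funnier).
        if task == "regression":
            return AutoModelForSequenceClassification.from_pretrained(
                MODEL_NAME, num_labels=1, problem_type="regression")
        return AutoModelForSequenceClassification.from_pretrained(
            MODEL_NAME, num_labels=2, problem_type="single_label_classification")

    # Illustrative grid of the kind of hyperparameters searched during fine-tuning.
    learning_rates = [2e-5, 3e-5, 5e-5]
    batch_sizes = [16, 32]

    for lr, bs in product(learning_rates, batch_sizes):
        model = build_model("regression")
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        # ... fine-tune for a few epochs at batch size bs, score RMSE on the
        # dev set, and keep the configuration with the lowest error.

The same loop can be run with build_model("classification") and an accuracy criterion for the ranking sub-task; the contrast between the two is where the paper reports the larger gains from hyperparameter selection.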
Anthology ID: 2020.semeval-1.137
Volume: Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month: December
Year: 2020
Address: Barcelona (online)
Venues: COLING | SemEval
SIGs: SIGSEM | SIGLEX
Publisher: International Committee for Computational Linguistics
Pages: 1049–1054
URL: https://aclanthology.org/2020.semeval-1.137
DOI: 10.18653/v1/2020.semeval-1.137
PDF: https://aclanthology.org/2020.semeval-1.137.pdf