Anita Soloveva
2020
SO at SemEval-2020 Task 7: DeepPavlov Logistic Regression with BERT Embeddings vs SVR at Funniness Evaluation
Anita Soloveva
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This paper describes my efforts to evaluate how editing news headlines can make them funnier, within the framework of SemEval-2020 Task 7. I participated in both sub-tasks: Sub-task 1 “Regression” and Sub-task 2 “Predict the funnier of the two edited versions of an original headline”. I experimented with a number of different models, but ultimately used DeepPavlov logistic regression (LR) with BERT English cased embeddings for the first sub-task and a support vector regression (SVR) model for the second. The RMSE score obtained for the first sub-task was 0.65099, and the accuracy for the second was 0.32915.
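To give a concrete picture of such a pipeline, the following is a minimal sketch of feeding BERT sentence embeddings into off-the-shelf regressors and classifiers. It uses Hugging Face transformers and scikit-learn as stand-ins for the DeepPavlov components described in the abstract; the file names, column names (“edited”, “meanGrade”), and mean-pooling strategy are assumptions for illustration, not the author’s exact setup.

```python
# Sketch: BERT sentence embeddings fed into scikit-learn models (LR and SVR).
# Uses Hugging Face transformers + scikit-learn, not DeepPavlov; data layout is assumed.
import numpy as np
import pandas as pd
import torch
from transformers import BertModel, BertTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
bert = BertModel.from_pretrained("bert-base-cased")
bert.eval()

def embed(sentences, batch_size=32):
    """Mean-pooled BERT embeddings for a list of (edited) headlines."""
    chunks = []
    with torch.no_grad():
        for i in range(0, len(sentences), batch_size):
            batch = tokenizer(sentences[i:i + batch_size], padding=True,
                              truncation=True, return_tensors="pt")
            hidden = bert(**batch).last_hidden_state              # (B, T, 768)
            mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
            chunks.append(((hidden * mask).sum(1) / mask.sum(1)).numpy())
    return np.vstack(chunks)

# Hypothetical CSV layout: edited headline text plus its mean funniness grade (0-3).
train = pd.read_csv("train.csv")   # assumed columns: "edited", "meanGrade"
dev = pd.read_csv("dev.csv")
X_train, X_dev = embed(train["edited"].tolist()), embed(dev["edited"].tolist())

# SVR regresses the grade directly and is scored with RMSE, as in Sub-task 1.
svr = SVR().fit(X_train, train["meanGrade"])
rmse = mean_squared_error(dev["meanGrade"], svr.predict(X_dev)) ** 0.5
print("SVR RMSE:", rmse)

# LR treats rounded grades as discrete classes, one way to apply a classifier here.
lr = LogisticRegression(max_iter=1000).fit(X_train, train["meanGrade"].round().astype(int))
print("LR accuracy:", lr.score(X_dev, dev["meanGrade"].round().astype(int)))
```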
2019
HAD-Tübingen at SemEval-2019 Task 6: Deep Learning Analysis of Offensive Language on Twitter: Identification and Categorization
Himanshu Bansal | Daniel Nagel | Anita Soloveva
Proceedings of the 13th International Workshop on Semantic Evaluation
This paper describes the submissions of our team, HAD-Tübingen, to SemEval-2019 Task 6: “OffensEval: Identifying and Categorizing Offensive Language in Social Media”. We participated in all three sub-tasks: Sub-task A “Offensive language identification”, Sub-task B “Automatic categorization of offense types”, and Sub-task C “Offense target identification”. As a baseline model we used a long short-term memory (LSTM) recurrent neural network to identify and categorize offensive tweets. For all tasks we experimented with external databases in a post-processing step to enhance the predictions of our model. The best macro-average F1 scores obtained for Sub-tasks A, B and C are 0.73, 0.52, and 0.37, respectively.
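As an illustration of what such an LSTM baseline can look like, here is a minimal sketch written with Keras; the vocabulary size, layer widths, and preprocessing are assumptions rather than the HAD-Tübingen configuration, and only the binary Sub-task A setting is shown.

```python
# Sketch of an LSTM baseline for Sub-task A (offensive vs. not offensive).
# Written with Keras for illustration; hyperparameters are assumptions, not the
# exact HAD-Tübingen configuration.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size after tokenizing tweets
EMBED_DIM = 128      # assumed embedding width

def build_lstm_classifier():
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),   # token ids -> dense vectors
        layers.LSTM(64),                           # sequence encoder over the tweet
        layers.Dense(1, activation="sigmoid"),     # OFF vs. NOT decision
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: X_* are padded integer matrices of tokenized tweets, y_* are 0/1 labels.
# model = build_lstm_classifier()
# model.fit(X_train, y_train, validation_data=(X_dev, y_dev), epochs=5, batch_size=32)
```

For the multi-class Sub-tasks B and C, the sigmoid output would be replaced by a softmax over the offense categories; the external databases mentioned in the abstract would then adjust these predictions in post-processing.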