SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020)

We present the results and the main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval-2020). The task included three subtasks corresponding to the hierarchical taxonomy of the OLID schema from OffensEval-2019, and it was offered in five languages: Arabic, Danish, English, Greek, and Turkish. OffensEval-2020 was one of the most popular tasks at SemEval-2020, attracting a large number of participants across all subtasks and languages: a total of 528 teams signed up to participate in the task, 145 teams submitted official runs on the test data, and 70 teams submitted system description papers.


Introduction
Offensive language is ubiquitous on social media platforms such as Facebook, Twitter, and Reddit, and it comes in many forms. Given the multitude of terms and definitions related to offensive language used in the literature, several recent studies have investigated the common aspects of different abusive language detection tasks (Waseem et al., 2017; Wiegand et al., 2018). One such example is SemEval-2019 Task 6: OffensEval (Zampieri et al., 2019b), which is the precursor to the present shared task. OffensEval-2019 used the Offensive Language Identification Dataset (OLID), which contains over 14,000 English tweets annotated using a hierarchical three-level annotation schema that takes both the target and the type of offensive content into account (Zampieri et al., 2019a). The assumption behind this annotation schema is that the target of offensive messages is an important variable that allows us to discriminate between, e.g., hate speech, which often consists of insults targeted toward a group, and cyberbullying, which typically targets individuals. A number of recently organized related shared tasks followed similar hierarchical models. Examples include HASOC-2019 (Mandl et al., 2019) for English, German, and Hindi, HatEval-2019 (Basile et al., 2019) for English and Spanish, GermEval-2019 for German (Struß et al., 2019), and TRAC-2020 (Kumar et al., 2020) for English, Bengali, and Hindi.
OffensEval-2019 attracted nearly 800 team registrations and received 115 official submissions, which demonstrates the strong interest of the research community in this topic. We therefore organized a follow-up, OffensEval-2020 (SemEval-2020 Task 12), described in this report, which builds on the success of OffensEval-2019 with several improvements. In particular, we used the same three-level taxonomy to annotate new datasets in five languages, where each level in this taxonomy corresponds to a subtask in the competition:
• Subtask A: Offensive language identification;
• Subtask B: Automatic categorization of offense types;
• Subtask C: Offense target identification.
The contributions of OffensEval-2020 can be summarized as follows:
• We provided the participants with a new, large-scale semi-supervised training dataset containing over nine million English tweets (Rosenthal et al., 2020).
• Compared to OffensEval-2019, we used larger test datasets for all subtasks.
Overall, OffensEval-2020 was a very successful task. The huge interest demonstrated last year continued this year, with 528 teams signing up to participate in the task, and 145 of them submitting official runs on the test dataset. Furthermore, OffensEval-2020 received 70 system description papers, which is an all-time record for a SemEval task.
The remainder of this paper is organized as follows: Section 2 describes the annotation schema. Section 3 presents the five datasets that we used in the competition. Sections 4-9 present the results and discuss the approaches taken by the participating systems for each of the five languages. Finally, Section 10 concludes and suggests some possible directions for future work.

Annotation Schema
OLID's annotation schema proposes a hierarchical modeling of offensive language. It classifies each example using the following three-level hierarchy:

Level A - Offensive Language Detection
Is the text offensive (OFF) or not offensive (NOT)?
NOT: text that is neither offensive, nor profane; OFF: text containing inappropriate language, insults, or threats.

Level B - Categorization of Offensive Language
Is the offensive text targeted (TIN) or untargeted (UNT)? TIN: targeted insults or threats towards a group or an individual; UNT: untargeted profanity or swearing.

Level C - Offensive Language Target Identification
Who or what is the target of the offensive content?
IND: the target is an individual, which can be explicitly mentioned or it can be implicit; GRP: the target is a group of people based on ethnicity, gender, sexual orientation, religious belief, or other common characteristic; OTH: the target does not fall into any of the previous categories, e.g., organizations, events, and issues.
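As a concrete illustration of the hierarchy, the sketch below applies the three levels as a cascade: Level B is only predicted for OFF texts, and Level C only for TIN texts. The per-level classifiers here are deliberately trivial keyword heuristics (hypothetical, for illustration only); real systems train a model for each level.

```python
# Sketch of the OLID three-level cascade. The term lists and rules are toy
# placeholders (hypothetical), standing in for trained per-level classifiers.

OFFENSIVE_TERMS = {"idiot", "stupid"}  # toy lexicon, not from the task data
GROUP_TERMS = {"they", "them"}         # toy heuristic for group targets

def classify_olid(text: str) -> dict:
    """Apply Levels A, B, and C hierarchically, as in the OLID schema."""
    tokens = set(text.lower().split())
    labels = {"A": "NOT", "B": None, "C": None}

    # Level A: offensive (OFF) or not (NOT)
    if not tokens & OFFENSIVE_TERMS:
        return labels            # Levels B and C apply only to OFF texts
    labels["A"] = "OFF"

    # Level B: targeted insult/threat (TIN) vs. untargeted profanity (UNT)
    labels["B"] = "TIN" if "@user" in tokens else "UNT"
    if labels["B"] == "UNT":
        return labels            # Level C applies only to targeted texts

    # Level C: individual (IND), group (GRP), or other (OTH)
    labels["C"] = "GRP" if tokens & GROUP_TERMS else "IND"
    return labels
```

Note that errors at Level A propagate downward: a text misclassified as NOT never reaches Levels B and C, which is an inherent property of hierarchical schemes such as OLID.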

Data
In this section, we describe the datasets for all five languages: Arabic, Danish, English, Greek, and Turkish. All of them follow the OLID annotation schema, and all were pre-processed in the same way, e.g., all user mentions were substituted by @USER for anonymization. Introducing new languages under a standardized schema for detecting offensive and targeted speech should improve dataset consistency, in line with current best practices in abusive language data collection (Vidgen and Derczynski, 2020). All languages contain data for subtask A, while only English contains data for subtasks B and C. The distribution of the data across categories for all languages for subtask A is shown in Table 1, while Tables 2 and 3 present statistics about the data for the English subtasks B and C, respectively. Labeled examples from the different datasets are shown in Table 4.

English For English, we provided two datasets: OLID from OffensEval-2019 (Zampieri et al., 2019a), and SOLID, a new dataset we created for this task (Rosenthal et al., 2020). SOLID stands for Semi-Supervised Offensive Language Identification Dataset, and it contains 9,089,140 English tweets, making it the largest dataset of its kind. To build SOLID, we collected random tweets using the 20 most common English stopwords, such as the, of, and, and to. We then labeled the collected tweets in a semi-supervised manner using democratic co-training, with OLID as a seed dataset. For the co-training, we used four models with different inductive biases: PMI (Turney and Littman, 2003), FastText (Joulin et al., 2017), LSTM (Hochreiter and Schmidhuber, 1997), and BERT (Devlin et al., 2019). We selected the OFF tweets for the test set using this semi-supervised process and then annotated them manually for all subtasks; we further added 2,500 NOT tweets selected by the same process without additional manual annotation.
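The democratic co-training step can be illustrated with a small sketch. The scores and thresholds below are hypothetical placeholders, not the ones used to build SOLID (see Rosenthal et al., 2020 for the actual procedure): each unlabeled tweet receives a P(OFF) estimate from each of the four models, and a label is kept only when the models agree with sufficient confidence.

```python
from statistics import mean, stdev

# Sketch of democratic co-training aggregation over unlabeled tweets.
# The four scores per tweet are hypothetical placeholders standing in for
# PMI, FastText, LSTM, and BERT predictions of P(OFF); the thresholds are
# illustrative only.

def aggregate(scores, off_threshold=0.8, not_threshold=0.2, max_std=0.1):
    """Return 'OFF', 'NOT', or None (discard) from per-model P(OFF) scores."""
    avg, spread = mean(scores), stdev(scores)
    if spread > max_std:          # models disagree: skip this tweet
        return None
    if avg >= off_threshold:
        return "OFF"
    if avg <= not_threshold:
        return "NOT"
    return None                   # ambiguous confidence region: skip

unlabeled = {
    "tweet_1": [0.92, 0.88, 0.95, 0.90],  # consistently offensive
    "tweet_2": [0.05, 0.10, 0.08, 0.12],  # consistently not offensive
    "tweet_3": [0.90, 0.20, 0.60, 0.40],  # models disagree: discarded
}
labels = {t: aggregate(s) for t, s in unlabeled.items()}
```

The key property is that a label is only assigned when models with different inductive biases independently agree, which keeps the noise rate of the weak labels low.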
We computed Fleiss' κ Inter-Annotator Agreement (IAA) on a small subset of instances that were predicted to be OFF, and obtained 0.988 for Level A (almost perfect agreement), 0.818 for Level B (substantial agreement), and 0.630 for Level C (moderate agreement). The annotation for Level C was more challenging both because it is three-way and because an offensive tweet can sometimes mention several types of targets, while the annotators were forced to choose a single label.
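Fleiss' κ, used for these IAA figures, corrects raw agreement for the agreement expected by chance. A minimal implementation of the standard formula, over an items-by-categories matrix of rating counts:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an items x categories matrix of rating counts.
    Every row must sum to the same number of raters."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item agreement: fraction of rater pairs that agree on the item
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_items
    # Chance agreement from the marginal category proportions
    totals = [sum(col) for col in zip(*counts)]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

For example, three annotators who agree unanimously on every item yield κ = 1, while systematic disagreement drives κ toward (or below) zero even when raw agreement looks moderate.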
Arabic The Arabic dataset consists of 10,000 tweets collected in April-May 2019 using the Twitter API with the language filter set to Arabic: lang:ar. In order to increase the chance of having offensive content, only tweets with two or more vocative particles (yA in Arabic) were considered for annotation; the vocative particle is used mainly to direct the speech to a person or to a group, and it is widely observed in offensive communications in almost all Arabic dialects. This yielded 20% offensive tweets in the final dataset. The tweets were manually annotated (for Level A only) by a native speaker familiar with several Arabic dialects. A random subsample of offensive and non-offensive tweets was doubly annotated, and the Fleiss' κ IAA was found to be 0.92. More details can be found in (Mubarak et al., 2020b).
Danish The Danish dataset consists of 3,600 comments drawn from Facebook, Reddit, and a local newspaper, Ekstra Bladet. The selection of the comments was partially seeded using abusive terms gathered during a crowd-sourced lexicon compilation; in order to ensure sufficient data diversity, this seeding was limited to half of the data. The training data was not divided into fixed training/development splits, and participants were encouraged to perform cross-validation, as we wanted to avoid the issues that fixed splits can cause (Gorman and Bedrick, 2019). The annotation (for Level A only) was performed at the individual comment level by males aged 25-40. A full description of the dataset and an accompanying data statement (Bender and Friedman, 2018) can be found in (Sigurbergsson and Derczynski, 2020).
Greek The Offensive Greek Twitter Dataset (OGTD) used in this task is a compilation of 10,287 tweets. These tweets were sampled using popular and trending hashtags, including those of television programs such as series, reality and entertainment shows, along with some politically related tweets. Another portion of the dataset was fetched using pejorative terms and "you are" as keywords. This strategy was adopted under the hypothesis that tweets about TV and politics, along with tweets containing vulgar language, would yield a sufficient number of offensive posts. A team of volunteer annotators participated in the annotation process (for Level A only), with each tweet being judged by three annotators.
In cases of disagreement, labels with majority agreement above 66% were selected as the actual tweet labels. The IAA was 0.78 (using Fleiss' κ coefficient). A full description of the dataset collection and annotation is detailed in (Pitenis et al., 2020).
Turkish The Turkish dataset consists of over 35,000 tweets sampled uniformly from the Twitter stream and filtered using a list of the most frequent words in Turkish, as identified by Twitter. The tweets were annotated by volunteers (for Level A only), most of them by a single annotator. The Cohen's κ IAA, calculated on 5,000 doubly-annotated tweets, was 0.761. Note that we did not use any specific method for spotting offensive language, e.g., filtering by offensive words or following typical targets of offensive language. As a result, the label distribution closely resembles actual offensive language use on Twitter, with far more non-offensive than offensive tweets. More details about the sampling and the annotation process can be found in (Çöltekin, 2020).

Task Participation
A total of 528 teams signed up to participate in the task, and 145 of them submitted results: 6 teams made submissions for all five languages, 19 did so for four languages, 11 worked on three languages, 13 on two languages, and 96 focused on just one language. Tables 13, 14, and 15 show a summary of which team participated in which task. A total of 70 teams submitted system description papers, which are listed in Table 12. Below, we analyze the participation and the models used in each language track.

English Track
A total of 87 teams made submissions for the English track (23 of them participated in the 2019 edition of the task): 27 teams participated in all three English subtasks, 18 teams participated in two English subtasks, and 42 focused on one English subtask only.
Pre-processing and normalization Most teams performed some kind of pre-processing (67 teams) or text normalization (26 teams), which are typical steps when working with tweets. Text normalization included various transformations such as converting emojis to plain text, segmenting hashtags, general tweet text normalization (Satapathy et al., 2019), abbreviation expansion, bad word replacement, error correction, lowercasing, stemming, and/or lemmatization. Other techniques included the removal of @USER mentions, URLs, hashtags, emojis, emails, dates, numbers, punctuation, consecutive character repetitions, offensive words, and/or stop words.
Additional data Most teams found the weakly supervised SOLID dataset useful, and 58 teams ended up using it in their systems. Another six teams gave it a try, but could not benefit from it, and the remaining teams only used the manually annotated training data. Some teams used additional datasets from HASOC-2019 (Mandl et al., 2019), the Kaggle competitions on Detecting Insults in Social Commentary and Toxic Comment Classification, the TRAC-2018 shared task on Aggression Identification (Kumar et al., 2018a; Kumar et al., 2018b), and the Wikipedia Detox dataset (Wulczyn et al., 2017), as well as some lexicons such as HurtLex (Bassignana et al., 2018) and Hatebase. Finally, one team created their own dataset.

Subtask A
A total of 82 teams made submissions for subtask A, and the results can be seen in Table 5. This was the most popular subtask across all languages. The best team, UHH-LT, achieved an F1 score of 0.9223 using an ensemble of ALBERT models of different sizes. The second-ranked team, whose score of 0.9204 was very close behind, used RoBERTa-large fine-tuned on the SOLID dataset in an unsupervised way, i.e., using the MLM objective. The third team, Galileo, achieved an F1 score of 0.9198 using an ensemble that combined XLM-RoBERTa-base and XLM-RoBERTa-large trained on the subtask A data for all languages. The top-10 teams all used BERT, RoBERTa, or XLM-RoBERTa, sometimes as part of ensembles that also included CNNs and LSTMs (Hochreiter and Schmidhuber, 1997). Overall, the competition for this subtask was very strong, and the scores are very close: the teams ranked 2-16 are within one point in the third decimal place of each other, and those ranked 2-59 are within two absolute points of the best team. All but one team beat the majority class baseline (we suspect that team might have accidentally flipped their predicted labels).

Table 5: Results for English subtask A, ordered by macro-averaged F1 in descending order.
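All rankings in this paper use macro-averaged F1, i.e., the unweighted mean of the per-class F1 scores, which penalizes systems that ignore the minority OFF class. The toy example below (hypothetical data) also shows why a majority-class baseline lands near 0.44 on a roughly 4:1 imbalanced set, close to the baseline scores reported for some tracks:

```python
def macro_f1(gold, pred, labels=("OFF", "NOT")):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical gold labels with a 4:1 NOT/OFF class imbalance
gold = ["NOT"] * 80 + ["OFF"] * 20
majority = ["NOT"] * 100  # majority-class baseline predicts NOT everywhere
```

On this toy set the baseline scores F1 = 8/9 on NOT and 0 on OFF, for a macro-average of 4/9 ≈ 0.444; accuracy-style metrics would instead reward it with 0.80.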

Subtask B
A total of 41 teams made submissions for subtask B, and the results can be seen in Table 6. The best team was Galileo (ranked third on subtask A), whose ensemble model achieved an F1 score of 0.7462. The second-place team, PGSG, used a complex teacher-student architecture built on top of a BERT-LSTM model, which was fine-tuned on the SOLID dataset in an unsupervised way, i.e., optimizing for the MLM objective. NTU NLP was ranked third with an F1 score of 0.6906; they tackled subtasks A, B, and C jointly with a multi-task BERT-based model. Overall, the differences in the scores for subtask B are much larger than for subtask A: for example, the 4th team is two points behind the third one and seven points behind the first one. The top-ranking teams used BERT-based Transformer models, and all but four teams improved over the majority class baseline.

Subtask C
A total of 37 teams made submissions for subtask C, and the results are shown in Table 7. The best team was once again Galileo, with an F1 score of 0.7145. LT@Helsinki was ranked second with an F1 score of 0.6700; they used fine-tuned BERT with oversampling to mitigate the class imbalance. The third best system was PRHLT-UPV with an F1 score of 0.6692, which combined BERT with hand-crafted features; it was followed very closely by UHH-LT at rank 4 with an F1 score of 0.6683. This subtask was also dominated by BERT-based models, and all teams outperformed the majority class baseline.
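The oversampling step used by LT@Helsinki can be sketched as simple random duplication of minority-class examples until the classes are balanced; the function below is a hypothetical illustration of the general technique, not their implementation:

```python
import random

def oversample(examples, labels, seed=0):
    """Random oversampling: duplicate minority-class examples until every
    class is as frequent as the largest one."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(examples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out = []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out.append((x, y))
    rng.shuffle(out)
    return out
```

Balancing the training distribution this way helps the rare GRP and OTH classes contribute to the loss, at the cost of repeating (and potentially overfitting to) minority-class examples.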
Note that the absolute F1 scores obtained by the best teams in the English subtasks A and C are substantially higher than those obtained by the best teams in OffensEval-2019: 0.9223 vs. 0.8290 for subtask A, and 0.7145 vs. 0.6600 for subtask C. This suggests that the much larger SOLID dataset made available in OffensEval-2020 helped the models make more accurate predictions.

Table 7: Results for English subtask C, ordered by macro-averaged F1 in descending order.
Furthermore, it suggests that the weakly supervised method used to compile and annotate SOLID is a viable alternative to popular purely manual annotation approaches. A more detailed analysis of the systems' performances will be carried out in order to determine the contribution of the SOLID dataset to the results.

Best Systems
We provide some more details about the approaches used by the top teams for each subtask. We use subindices to show their rank for each subtask. Additional summaries for some of the best teams can be found in Appendix A.
Galileo (A:3,B:1,C:1) This team was ranked 3rd, 1st, and 1st on the English subtasks A, B, and C, respectively. This is also the only team ranked among the top-3 across all languages. For subtask A, they used multi-lingual pre-trained Transformers based on XLM-RoBERTa, followed by multi-lingual fine-tuning using the OffensEval data. Ultimately, they submitted an ensemble that combined XLM-RoBERTa-base and XLM-RoBERTa-large, achieving an F1 score of 0.9198. For subtasks B and C, they used knowledge distillation in a teacher-student framework, with Transformers such as ALBERT and ERNIE 2.0 as teacher models, achieving F1 scores of 0.7462 and 0.7145 for subtasks B and C, respectively.

UHH-LT (A:1)
This team was ranked 1st on subtask A with an F1 score of 0.9223. They fine-tuned different Transformer models on the OLID training data, and then combined them into an ensemble. They experimented with BERT-base and BERT-large (uncased), RoBERTa-base and RoBERTa-large, XLM-RoBERTa, and four different ALBERT models (large-v1, large-v2, xxlarge-v1, and xxlarge-v2). In their official submission, they used an ensemble combining different ALBERT models. They did not use the labels of the SOLID dataset, but found the tweets it contained nevertheless useful for unsupervised fine-tuning (i.e., using the MLM objective) of the pre-trained Transformers.

Arabic Track
A total of 108 teams registered to participate in the Arabic track, and ultimately 53 teams entered the competition with at least one valid submission. Among them, ten teams participated in the Arabic track only, while the rest participated in other languages in addition to Arabic. This was the second shared task for Arabic, after the one at the 4th workshop on Open-Source Arabic Corpora and Processing Tools (Mubarak et al., 2020a), which had different settings and fewer participating teams.
Pre-processing and normalization Most teams performed some kind of pre-processing or text normalization, e.g., normalizing Hamza shapes, Alif Maqsoura, and Taa Marbouta, or removing diacritics and non-Arabic characters; only one team replaced emojis with their textual counterparts. Table 8 shows the teams and the F1 scores they achieved for the Arabic subtask A. The majority class baseline had an F1 score of 0.4441, and several teams achieved results that doubled that baseline score. The best-performing team was ALAMIHamza with an F1 score of 0.9017. The second-best team, ALT, was almost tied with the winner, with an F1 score of 0.9016. The Galileo team was third with an F1 score of 0.8989. A summary of the approaches taken by the top-performing teams can be found in Appendix A; here we briefly describe the winning system:

Results
ALAMIHamza (A:1) The winning team achieved the highest F1 score using BERT to encode Arabic tweets, followed by a sigmoid classifier. They further translated emojis into their textual meanings.

Danish Track
A total of 72 teams registered to participate in the Danish track, and 39 of them actually made official submissions on the test dataset. This is the first shared task on offensive language identification to include Danish, and the dataset provided to the OffensEval-2020 participants is an extended version of the one from (Sigurbergsson and Derczynski, 2020).
Pre-processing and normalization Many teams used the pre-processing included in the relevant embedding model, e.g., BPE (Heinzerling and Strube, 2018) and WordPiece. Other pre-processing techniques included emoji normalization, spelling correction, sentiment tagging, lexical and regex-based term and phrase flagging, and hashtag segmentation.

Results
The results are shown in Table 9. We can see that all teams managed to outperform the majority class baseline. Moreover, all but one team improved over a FastText baseline (F1 = 0.5148), and most teams achieved an F1 score of 0.7 or higher. Interestingly, one of the top-ranked teams, JCT, was entirely non-neural.
LT@Helsinki (A:1) The winning team LT@Helsinki used NordicBERT for representation, as provided by BotXO. NordicBERT is customized to Danish and avoids some of the pre-processing noise and ambiguity introduced by other popular BERT implementations. The team further reduced orthographic lengthening to a maximum of two repeated characters, converted emojis to sentiment scores, and used co-occurrences of hashtags and references to usernames. They tuned the hyper-parameters of their model using 10-fold cross-validation.

Greek Track
A total of 71 teams registered to participate in the Greek track, and ultimately 37 of them made an official submission on the test dataset. This is the first shared task on offensive language identification to include Greek, and the dataset provided to the OffensEval-2020 participants is an extended version of the one from (Pitenis et al., 2020).
Pre-processing and normalization The participants experimented with various pre-processing and text normalization techniques, similar to those used for the other languages above. One team further reported replacing emojis with their textual equivalents.

Table 10: Results for Greek subtask A, ordered by macro-averaged F1 in descending order.

Results
The evaluation results are shown in Table 10. The top team, NLPDove, achieved an F1 score of 0.852, with Galileo coming in a close second with an F1 score of 0.851. The KS@LTH team was ranked third with an F1 score of 0.848. It is no surprise that the majority of the high-ranking submissions used large-scale pre-trained Transformers, with BERT being the most prominent among them, along with word2vec-style non-contextualized pre-trained word embeddings.
NLPDove (A:1) The winning team NLPDove used pre-trained embeddings from mBERT, which they fine-tuned using the training data. They generated a domain-specific vocabulary by running the WordPiece algorithm (Schuster and Nakajima, 2012), and the embeddings for this extended vocabulary were then used when pre-training and fine-tuning the model.

Turkish Track
A total of 86 teams registered to participate in the Turkish track, and ultimately 46 of them made an official submission on the test dataset. All teams except for one participated in at least one other track. This is the first shared task on offensive language identification to include Turkish, and the dataset provided to the OffensEval-2020 participants is an extended version of the one from (Çöltekin, 2020).

Results
The results are shown in Table 11. We can see that team Galileo achieved the highest macro-averaged F1 score of 0.8258, followed by SU-NLP and KUISAIL with F1 scores of 0.8167 and 0.8141, respectively. Note that the latter two teams are from Turkey, and they used some language-specific resources and tuning. Most results were in the interval 0.7-0.8, and almost all teams managed to outperform the majority class baseline, which had an F1 score of 0.4435.
Galileo (A:1) The best team in the Turkish subtask A was Galileo, which achieved top results in several other tracks. Unlike the systems ranked second and third, Galileo's system is language-agnostic, and it used data for all five languages in a multi-lingual training setup.

Conclusion and Future Work
We presented the results of OffensEval-2020, which featured datasets in five languages: Arabic, Danish, English, Greek, and Turkish. For English, we had three subtasks, representing the three levels of the OLID hierarchy. For the other four languages, we had a subtask for the top-level of the OLID hierarchy only. A total of 528 teams signed up to participate in OffensEval-2020, and 145 of them actually submitted results across all languages and subtasks.  Out of the 145 participating teams, 96 teams participated in one language only, 13 teams participated in two languages, 11 in three languages, 19 in four languages, and 6 teams submitted systems for all five languages. The official submissions per language ranged from 37 (for Greek) to 81 (for English). Finally, 70 of the 145 participating teams submitted system description papers, which is an all-time record.
The wide participation in the task allowed us to compare a number of approaches across different languages and datasets. Similarly to OffensEval-2019, we observed that the best systems for all languages and subtasks used large-scale BERT-style pre-trained Transformers such as BERT, RoBERTa, and mBERT. Unlike 2019, however, the multi-lingual nature of this year's data enabled cross-language approaches, which proved quite effective and were used by some of the top-ranked systems.
In future work, we plan to extend the task in several ways. First, we want to offer subtasks B and C for all five languages from OffensEval-2020. We further plan to add some additional languages, especially under-represented ones. Other interesting aspects to explore are code-mixing, e.g., mixing Arabic script and Latin alphabet in the same Arabic message, and code-switching, e.g., mixing Arabic and English words and phrases in the same message. Last but not least, we plan to cover a wider variety of social media platforms.

A Best-Performing Teams
Below we present a short overview of the top-3 systems for all subtasks and for all languages:

Galileo (EN B:1, EN C:1, TR A:1; DK A:2, GR A:2; AR A:3, EN A:3) This team was ranked 3rd, 1st, and 1st on the English subtasks A, B, and C, respectively; it was also ranked 1st for Turkish, 2nd for Danish and Greek, and 3rd for Arabic. This is the only team ranked among the top-3 across all languages. For subtask A (all languages), they used multi-lingual pre-trained Transformers based on XLM-RoBERTa, followed by multi-lingual fine-tuning using the OffensEval data. Ultimately, they submitted an ensemble that combined XLM-RoBERTa-base and XLM-RoBERTa-large. For the English subtasks B and C, they used knowledge distillation in a teacher-student framework, with Transformers such as ALBERT and ERNIE 2.0 as teacher models.

UHH-LT (EN A:1)
This team was ranked 1st on the English subtask A. They fine-tuned different Transformer models on the OLID training data, and then combined them into an ensemble. They experimented with BERT-base and BERT-large (uncased), RoBERTa-base and RoBERTa-large, XLM-RoBERTa, and four different ALBERT models (large-v1, large-v2, xxlarge-v1, and xxlarge-v2). In their official submission, they used an ensemble combining different ALBERT models. They did not use the labels of the SOLID dataset, but found the tweets it contained nevertheless useful for unsupervised fine-tuning (i.e., using the MLM objective) of the pre-trained Transformers.
LT@Helsinki (DK A:1; EN C:2) This team was ranked 1st for Danish and 2nd for English subtask C. For Danish, they used NordicBERT, which is customized to Danish, and avoids some of the pre-processing noise and ambiguity introduced by other popular BERT implementations. The team further reduced orthographic lengthening to maximum two repeated characters, converted emojis to sentiment scores, and used co-occurrences of hashtags and references to usernames. They tuned the hyper-parameters of their model using 10-fold cross validation. For English subtask C, they used a very simple approach: over-sample the training data to overcome the class imbalance, and then fine-tune BERT-base-uncased.
NLPDove (GR A:1; DK A:3) This team was ranked 1st for Greek and 3rd for Danish. This team used extensive preprocessing and two data augmentation strategies: using additional semi-supervised labels from SOLID with different thresholds, and cross-lingual transfer with data selection. They further proposed and used a new metric, Translation Embedding Distance, in order to measure the transferability of instances for cross-lingual data selection. Moreover, they used data from different languages to finetune an mBERT model. Ultimately, they used a majority vote ensemble of several mBERT models, with minor variations in the parameters.
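The majority-vote combination used in such ensembles can be sketched as follows (a generic illustration, not NLPDove's exact code); each inner list holds one model's predictions over the test set:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label lists into an ensemble prediction by
    plurality vote; ties go to the label counted first at that position."""
    ensemble = []
    for votes in zip(*predictions):       # one tuple of votes per test item
        ensemble.append(Counter(votes).most_common(1)[0][0])
    return ensemble
```

With an odd number of models (as in a typical ensemble of several mBERT variants), binary labels never tie, which is one practical reason to ensemble an odd number of systems.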
ALAMIHamza (AR A:1) This team was ranked 1st for Arabic. They used BERT to encode Arabic tweets, followed by a sigmoid classifier. They further translated emojis into their textual meanings.

PGSG (EN B:2)
The team was ranked 2nd on the English subtask B. They first fine-tuned the BERT-Large, Uncased (Whole Word Masking) checkpoint using the tweets from SOLID, but ignoring their labels. For this, they optimized for the MLM objective only, without the Next Sentence Prediction loss in BERT. Then, they built a BERT-LSTM model using this fine-tuned BERT, and adding LSTM layers on top of it, together with the [CLS] token. Finally, they used this architecture to train a Noisy Student model using the SOLID data.
ALT (AR A:2) The team was ranked 2nd for Arabic. They used an ensemble of SVM, CNN-BiLSTM, and Multilingual BERT models. The SVMs used character n-grams, word n-grams, and word embeddings as features, while the CNN-BiLSTM learned character embeddings and additionally used pre-trained word embeddings as input.

SU-NLP (TR A:2)
The team was ranked 2nd for Turkish. They used an ensemble of three different models: CNN-LSTM, BiLSTM-Attention, and BERT. They further used word embeddings pre-trained on tweets, as well as BERTurk, a BERT model for Turkish.