%0 Conference Proceedings
%T FHAC at GermEval 2021: Identifying German toxic, engaging, and fact-claiming comments with ensemble learning
%A Bornheim, Tobias
%A Grieger, Niklas
%A Bialonski, Stephan
%Y Risch, Julian
%Y Stoll, Anke
%Y Wilms, Lena
%Y Wiegand, Michael
%S Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments
%D 2021
%8 September
%I Association for Computational Linguistics
%C Duesseldorf, Germany
%F bornheim-etal-2021-fhac
%X The availability of language representations learned by large pretrained neural network models (such as BERT and ELECTRA) has led to improvements in many downstream Natural Language Processing tasks in recent years. Pretrained models usually differ in their pretraining objectives, architectures, and the datasets they are trained on, all of which can affect downstream performance. In this contribution, we fine-tuned German BERT and German ELECTRA models to identify toxic (subtask 1), engaging (subtask 2), and fact-claiming comments (subtask 3) in Facebook data provided by the GermEval 2021 competition. We created ensembles of these models and investigated whether and how classification performance depends on the number of ensemble members and their composition. On out-of-sample data, our best ensemble achieved a macro-F1 score of 0.73 (for all subtasks), and F1 scores of 0.72, 0.70, and 0.76 for subtasks 1, 2, and 3, respectively.
%U https://aclanthology.org/2021.germeval-1.16
%P 105-111