SU-NLP at WNUT-2020 Task 2: The Ensemble Models

Kenan Fayoumi, Reyyan Yeniterzi


Abstract
In this paper, we address the problem of identifying informative tweets related to COVID-19, framed as a binary classification task, as part of our submission for W-NUT 2020 Task 2. Specifically, we focus on ensembling methods to boost the performance of classifiers such as BERT and CNN-based models. We show that ensembling can reduce the variance in performance, particularly for BERT base models.
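The variance-reduction effect described above comes from combining the predictions of several independently trained classifiers. A minimal sketch of one common approach, soft voting (averaging predicted class probabilities across runs), is shown below; the probability values and the two-label scheme (INFORMATIVE vs. UNINFORMATIVE) are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def ensemble_average(prob_runs):
    """Soft voting: average class probabilities across model runs.

    prob_runs: array of shape (n_models, n_examples, n_classes).
    Returns averaged probabilities of shape (n_examples, n_classes).
    """
    return np.mean(prob_runs, axis=0)

# Hypothetical probabilities from three fine-tuned runs on two tweets.
# Column 0 = UNINFORMATIVE, column 1 = INFORMATIVE (assumed labels).
runs = np.array([
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.7, 0.3], [0.3, 0.7]],
    [[0.8, 0.2], [0.6, 0.4]],  # this run disagrees on the second tweet
])

avg = ensemble_average(runs)
preds = avg.argmax(axis=1)  # final labels after averaging: [0, 1]
```

Note that the third run alone would label the second tweet UNINFORMATIVE; averaging over all runs smooths out such single-run fluctuations, which is the variance-reduction effect the abstract refers to.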
Anthology ID:
2020.wnut-1.61
Volume:
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
Month:
November
Year:
2020
Address:
Online
Editors:
Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue:
WNUT
Publisher:
Association for Computational Linguistics
Pages:
423–427
URL:
https://aclanthology.org/2020.wnut-1.61
DOI:
10.18653/v1/2020.wnut-1.61
Cite (ACL):
Kenan Fayoumi and Reyyan Yeniterzi. 2020. SU-NLP at WNUT-2020 Task 2: The Ensemble Models. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 423–427, Online. Association for Computational Linguistics.
Cite (Informal):
SU-NLP at WNUT-2020 Task 2: The Ensemble Models (Fayoumi & Yeniterzi, WNUT 2020)
PDF:
https://aclanthology.org/2020.wnut-1.61.pdf
Data
WNUT-2020 Task 2