Rajagopal Eswari
2020
CIA_NITT at WNUT-2020 Task 2: Classification of COVID-19 Tweets Using Pre-trained Language Models
Yandrapati Prakash Babu
|
Rajagopal Eswari
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
This paper presents our models for WNUT-2020 shared task 2, which involves the identification of COVID-19-related informative tweets. We treat this as a binary text classification problem and experiment with pre-trained language models. Our first model, based on CT-BERT, achieves an F1-score of 88.7%; our second model, an ensemble of CT-BERT, RoBERTa, and SVM, achieves an F1-score of 88.52%.
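The abstract does not specify how the three classifiers are combined; a minimal soft-voting sketch, assuming Hugging Face checkpoint names (e.g. digitalepidemiologylab/covid-twitter-bert-v2 for CT-BERT) and a fitted scikit-learn SVM, might look like:

```python
# Hypothetical soft-voting ensemble over CT-BERT, RoBERTa, and an SVM.
# Checkpoint names and the averaging scheme are assumptions, not the
# paper's confirmed setup.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def encoder_probs(model_name, texts):
    """Return class probabilities from a (fine-tuned) sequence classifier."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
    model.eval()
    enc = tok(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

texts = ["Official: 120 new COVID-19 cases reported today.",
         "Stay safe everyone, sending love!"]

# Probabilities from the two transformer models (checkpoints assumed fine-tuned
# on the shared-task training data).
p_ctbert = encoder_probs("digitalepidemiologylab/covid-twitter-bert-v2", texts)
p_roberta = encoder_probs("roberta-base", texts)

# The SVM would be a fitted sklearn pipeline, e.g.
# Pipeline([("tfidf", TfidfVectorizer()), ("svm", SVC(probability=True))]);
# a uniform placeholder stands in for it in this sketch.
p_svm = np.full_like(p_ctbert, 0.5)

# Average the three probability distributions and take the argmax.
ensemble = (p_ctbert + p_roberta + p_svm) / 3.0
labels = ensemble.argmax(axis=1)  # 1 = INFORMATIVE, 0 = UNINFORMATIVE (assumed)
print(labels)
```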
Identification of Medication Tweets Using Domain-specific Pre-trained Language Models
Yandrapati Prakash Babu
|
Rajagopal Eswari
Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task
In this paper, we present our approach for task 1 of SMM4H 2020, which involves the automatic classification of tweets mentioning medications or dietary supplements. For this task, we experiment with pre-trained models such as Biomedical RoBERTa, Clinical BERT, and Biomedical BERT. Our approach achieves an F1-score of 73.56%.
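A minimal fine-tuning sketch for one of these domain-specific encoders, assuming the allenai/biomed_roberta_base checkpoint for Biomedical RoBERTa and toy data in place of the shared-task tweets (hyperparameters are illustrative, not the paper's configuration):

```python
# Fine-tune a domain-specific encoder for binary tweet classification.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class TweetDataset(Dataset):
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, padding=True, truncation=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

checkpoint = "allenai/biomed_roberta_base"  # Biomedical RoBERTa (assumed checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy examples; the shared task supplies the real labeled tweets.
train = TweetDataset(
    ["Took 500mg of paracetamol for my headache.", "Great game last night!"],
    [1, 0],  # 1 = mentions a medication/supplement, 0 = does not
    tokenizer,
)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train).train()
```

Swapping the checkpoint string for emilyalsentzer/Bio_ClinicalBERT or a Biomedical BERT variant would reproduce the same comparison across domain-specific encoders described in the abstract.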