Identification of Medication Tweets Using Domain-specific Pre-trained Language Models

Yandrapati Prakash Babu, Rajagopal Eswari


Abstract
In this paper, we present our approach for Task 1 of SMM4H 2020, which involves the automatic classification of tweets that mention medications or dietary supplements. For this task, we experiment with domain-specific pre-trained models: Biomedical RoBERTa, Clinical BERT, and Biomedical BERT. Our approach achieves an F1-score of 73.56%.
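The approach described in the abstract amounts to fine-tuning a domain-specific pre-trained transformer with a binary classification head. The sketch below illustrates that setup with the Hugging Face `transformers` API; it is an assumption about the implementation, not the authors' code. A tiny randomly-initialised `BertConfig` is used so the snippet runs without downloading weights; in practice one would instead load a pre-trained checkpoint (e.g. the `allenai/biomed_roberta_base` BioMed-RoBERTa weights) via `AutoModelForSequenceClassification.from_pretrained(..., num_labels=2)`.

```python
# Hypothetical sketch: binary tweet classification (medication mention vs. not)
# with a BERT-style encoder and a 2-way classification head.
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny random config so the example is self-contained; a real run would load
# pre-trained biomedical/clinical weights instead of initialising from scratch.
config = BertConfig(
    vocab_size=100,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    num_labels=2,  # medication mention / no mention
)
model = BertForSequenceClassification(config)

# Stand-in for tokenised tweets: 4 sequences of 16 token ids each.
input_ids = torch.randint(0, config.vocab_size, (4, 16))
labels = torch.tensor([1, 0, 1, 0])

out = model(input_ids=input_ids, labels=labels)
out.loss.backward()          # gradients flow, i.e. the encoder is fine-tuned
print(out.logits.shape)      # torch.Size([4, 2]): one score pair per tweet
```

Predictions are then `out.logits.argmax(dim=-1)`; with pre-trained domain-specific weights, a few epochs of this fine-tuning loop over the labelled tweets is the standard recipe.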
Anthology ID:
2020.smm4h-1.22
Volume:
Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Graciela Gonzalez-Hernandez, Ari Z. Klein, Ivan Flores, Davy Weissenbacher, Arjun Magge, Karen O'Connor, Abeed Sarker, Anne-Lyse Minard, Elena Tutubalina, Zulfat Miftahutdinov, Ilseyar Alimova
Venue:
SMM4H
Publisher:
Association for Computational Linguistics
Pages:
128–130
URL:
https://aclanthology.org/2020.smm4h-1.22
Cite (ACL):
Yandrapati Prakash Babu and Rajagopal Eswari. 2020. Identification of Medication Tweets Using Domain-specific Pre-trained Language Models. In Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task, pages 128–130, Barcelona, Spain (Online). Association for Computational Linguistics.
Cite (Informal):
Identification of Medication Tweets Using Domain-specific Pre-trained Language Models (Prakash Babu & Eswari, SMM4H 2020)
PDF:
https://aclanthology.org/2020.smm4h-1.22.pdf