Lucie Gattepaille


2020

How Far Can We Go with Just Out-of-the-box BERT Models?
Lucie Gattepaille
Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task

Social media have been seen as a promising data source for pharmacovigilance. However, methods for automatic extraction of Adverse Drug Reactions from social media platforms such as Twitter still need further development before they can be included reliably in routine pharmacovigilance practices. As Bidirectional Encoder Representations from Transformers (BERT) models have recently shown great performance on many major NLP tasks, we decided to test their performance on SMM4H Shared Tasks 1 to 3, submitting results from pretrained and fine-tuned BERT models with no knowledge added beyond that carried in the training and additional datasets. Our three submissions all ended up above the average over all teams' submissions: 0.766 F1 for task 1 (15% above the average of 0.665), 0.47 F1 for task 2 (2% above the average of 0.46) and 0.380 F1 for task 3 (30% above the average of 0.292). Used in many of the high-ranking submissions in the 2019 edition of the SMM4H Shared Task, BERT continues to be state-of-the-art in ADR extraction for Twitter data.
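To illustrate the out-of-the-box approach the abstract describes, below is a minimal sketch of fine-tuning a stock BERT model for binary ADR tweet classification, in the spirit of Task 1. This is not the author's released code: the checkpoint (bert-base-uncased), hyperparameters, and toy data are illustrative assumptions.

```python
# Minimal sketch: fine-tune an off-the-shelf BERT model to classify
# tweets as ADR-mentioning (1) or not (0). Checkpoint, learning rate,
# and toy examples are assumptions, not the paper's reported setup.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

class TweetDataset(Dataset):
    """Holds tokenized tweets paired with 0/1 ADR labels."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)

# Toy examples standing in for the SMM4H training tweets.
train = TweetDataset(
    ["this med gave me a terrible headache", "loving the new phone"],
    [1, 0], tokenizer)
loader = DataLoader(train, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):              # a few epochs, typical for BERT fine-tuning
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss  # cross-entropy over the two labels
        loss.backward()
        optimizer.step()
```

The same pattern extends to the other shared tasks by swapping the head (e.g. a token-classification head for extraction) while keeping the pretrained encoder unchanged, which is the essence of the "out-of-the-box" setup.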