Enolp musk@SMM4H’22 : Leveraging Pre-trained Language Models for Stance And Premise Classification

Millon Das, Archit Mangrulkar, Ishan Manchanda, Manav Kapadnis, Sohan Patnaik


Abstract
This paper presents our approaches for Shared Tasks 2a and 2b of the Social Media Mining for Health (SMM4H) 2022 workshop. Beyond the baseline architectures, we experiment with part-of-speech (PoS), dependency-parse, and TF-IDF features. Additionally, we perform contrastive pretraining on our best models using a supervised contrastive loss function. In both tasks, we outperformed the mean and median scores and ranked first on the validation set. On the test set, we achieved an F1-score of 0.636 for stance classification using the CovidTwitterBERT model and an F1-score of 0.664 for premise classification using the BART-base model.
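The abstract names a supervised contrastive loss but does not define it; below is a minimal PyTorch sketch of the standard supervised contrastive (SupCon) formulation of Khosla et al. (2020), which that phrase typically refers to. The function name, temperature value, and tensor shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Sketch of a supervised contrastive (SupCon) loss.

    embeddings: (batch, dim) encoder outputs, e.g. pooled [CLS] vectors
    labels:     (batch,) integer class labels (stance or premise)
    """
    z = F.normalize(embeddings, dim=1)           # project onto the unit sphere
    sim = z @ z.T / temperature                  # pairwise scaled similarities
    batch = z.size(0)
    self_mask = torch.eye(batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)       # exclude self-comparisons
    # Positives: other in-batch examples sharing the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-likelihood of the positives for each anchor.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()
```

Pretraining with such an objective pulls same-class tweet embeddings together and pushes different-class embeddings apart before the classification head is fine-tuned.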
Anthology ID: 2022.smm4h-1.42
Volume: Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Editors: Graciela Gonzalez-Hernandez, Davy Weissenbacher
Venue: SMM4H
Publisher: Association for Computational Linguistics
Pages: 156–159
URL: https://aclanthology.org/2022.smm4h-1.42
Cite (ACL):
Millon Das, Archit Mangrulkar, Ishan Manchanda, Manav Kapadnis, and Sohan Patnaik. 2022. Enolp musk@SMM4H’22 : Leveraging Pre-trained Language Models for Stance And Premise Classification. In Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task, pages 156–159, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Cite (Informal):
Enolp musk@SMM4H’22 : Leveraging Pre-trained Language Models for Stance And Premise Classification (Das et al., SMM4H 2022)
PDF: https://aclanthology.org/2022.smm4h-1.42.pdf
Code: architmang/enolp_musk-smm4h_coling2022