AILAB-Udine@SMM4H’22: Limits of Transformers and BERT Ensembles

Beatrice Portelli, Simone Scaboro, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra


Abstract
This paper describes the models developed by the AILAB-Udine team for the SMM4H’22 Shared Task. We explored the limits of Transformer-based models on text classification, entity extraction, and entity normalization, tackling Tasks 1, 2, 5, 6, and 10. The main takeaways from our participation in the different tasks are the overwhelmingly positive effect of combining different architectures through ensemble learning, and the great potential of generative models for term normalization.
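As a concrete illustration of the ensemble-learning takeaway, the following is a minimal sketch of majority voting over independently fine-tuned Transformer classifiers. The checkpoint paths and the voting scheme are illustrative assumptions, not the authors’ exact setup or released models.

```python
# Minimal sketch: majority-vote ensemble of Transformer text classifiers.
# The checkpoint paths below are hypothetical placeholders for models
# fine-tuned on the task data; they are NOT the authors' released models.
from collections import Counter
from transformers import pipeline

CHECKPOINTS = [
    "path/to/bert-finetuned",      # hypothetical fine-tuned BERT
    "path/to/roberta-finetuned",   # hypothetical fine-tuned RoBERTa
    "path/to/bertweet-finetuned",  # hypothetical fine-tuned BERTweet
]

# Each ensemble member is a standard Hugging Face classification pipeline.
classifiers = [pipeline("text-classification", model=c) for c in CHECKPOINTS]

def ensemble_predict(text: str) -> str:
    """Return the label chosen by the majority of the ensemble members."""
    votes = [clf(text)[0]["label"] for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

print(ensemble_predict("took some aspirin and now my stomach hurts"))
```

Combining architecturally diverse models (rather than multiple seeds of one architecture) tends to make such votes less correlated, which is one plausible reading of why the abstract highlights mixing different architectures.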
Anthology ID: 2022.smm4h-1.36
Volume: Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Editors: Graciela Gonzalez-Hernandez, Davy Weissenbacher
Venue: SMM4H
Publisher: Association for Computational Linguistics
Pages: 130–134
URL: https://aclanthology.org/2022.smm4h-1.36
Cite (ACL): Beatrice Portelli, Simone Scaboro, Emmanuele Chersoni, Enrico Santus, and Giuseppe Serra. 2022. AILAB-Udine@SMM4H’22: Limits of Transformers and BERT Ensembles. In Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task, pages 130–134, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Cite (Informal): AILAB-Udine@SMM4H’22: Limits of Transformers and BERT Ensembles (Portelli et al., SMM4H 2022)
PDF: https://aclanthology.org/2022.smm4h-1.36.pdf