Christopher Palmer
2024
Enhancing Social Media Health Prediction Certainty by Integrating Large Language Models with Transformer Classifiers
Sedigh Khademi | Christopher Palmer | Muhammad Javed | Jim Buttery | Gerardo Dimaguila
Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks
This paper presents our approach for SMM4H 2024 Task 5, which focuses on identifying tweets in which users discuss their children’s health conditions: ADHD, ASD, delayed speech, or asthma. Our approach uses a pipeline that combines transformer-based classifiers and GPT-4 large language models (LLMs). We first address data imbalance in the training set using topic modelling and under-sampling. Next, we train RoBERTa-based classifiers on the adjusted data. Finally, GPT-4 refines the classifier’s predictions for uncertain cases (those with confidence below 0.9). This strategy achieved a significant improvement over the baseline RoBERTa models. Our work demonstrates the effectiveness of combining transformer classifiers and LLMs for extracting health insights from social media conversations.
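The low-confidence hand-off described in the abstract can be illustrated with a short sketch. The model name, prompt wording, and label handling below are illustrative assumptions rather than the authors' actual implementation; the 0.9 threshold is the one stated in the abstract.

```python
# Illustrative sketch of the confidence-gated pipeline: a fine-tuned RoBERTa
# classifier labels each tweet, and predictions whose softmax confidence falls
# below 0.9 are re-checked by GPT-4. Model names and the prompt are assumptions.
from transformers import pipeline
from openai import OpenAI

CONFIDENCE_THRESHOLD = 0.9

clf = pipeline("text-classification", model="roberta-base")  # placeholder for the fine-tuned checkpoint
llm = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_tweet(text: str) -> str:
    pred = clf(text)[0]  # e.g. {"label": "LABEL_1", "score": 0.87}
    if pred["score"] >= CONFIDENCE_THRESHOLD:
        return pred["label"]  # keep the classifier's confident prediction

    # Uncertain case: defer to GPT-4 with a hypothetical yes/no prompt.
    response = llm.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Does this tweet report that the author's own child has ADHD, "
                "ASD, delayed speech, or asthma? Answer 'positive' or 'negative'.\n\n"
                f"Tweet: {text}"
            ),
        }],
    )
    return response.choices[0].message.content.strip().lower()
```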
2022
CHAAI@SMM4H’22: RoBERTa, GPT-2 and Sampling - An interesting concoction
Christopher Palmer | Sedigheh Khademi Habibabadi | Muhammad Javed | Gerardo Luis Dimaguila | Jim Buttery
Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task
This paper describes our team’s approaches to SMM4H 2022 Shared Tasks 1 and 6. Task 6 was the “Classification of tweets which indicate self-reported COVID-19 vaccination status (in English)”. Our best test F1 score was 0.82, using a CT-BERT model, which exceeded the median test F1 score of 0.77 and was close to the 0.83 F1 score of the SMM4H baseline model. Task 1 was described as the “Classification, detection and normalization of Adverse Events (AE) mentions in tweets (in English)”. We undertook task 1a, and with a RoBERTa-base model achieved an F1 score of 0.61 on test data, which exceeded the task’s mean test F1 of 0.56.
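As a rough illustration of the classifier setup these tasks share, the sketch below fine-tunes a RoBERTa-base binary classifier with the Hugging Face Trainer on toy data; the example tweets, hyperparameters, and output directory are assumptions, not the paper's actual training configuration.

```python
# Minimal RoBERTa-base fine-tuning sketch for binary tweet classification
# (e.g. self-reported vaccination status). Toy data and hyperparameters are
# placeholders; the paper's own training setup is not reproduced here.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Hypothetical examples standing in for the SMM4H training tweets.
train = Dataset.from_dict({
    "text": ["Just got my second covid shot!", "Vaccines roll out next month."],
    "label": [1, 0],
})
train = train.map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=64),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-tweet-clf", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train,
)
trainer.train()
```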
Co-authors
- Muhammad Javed 2
- Jim Buttery 2
- Sedigh Khademi 1
- Gerardo Dimaguila 1
- Sedigheh Khademi Habibabadi 1