Unimodal Intermediate Training for Multimodal Meme Sentiment Classification

Muzhaffar Hazman, Susan McKeever, Josephine Griffith


Abstract
Internet memes remain a challenging form of user-generated content for automated sentiment classification. The scarcity of labelled memes is a barrier to developing sentiment classifiers for multimodal memes. To address this shortage, we propose supplementing the training of a multimodal meme classifier with unimodal (image-only and text-only) data. In this work, we present a novel variant of supervised intermediate training that uses relatively abundant sentiment-labelled unimodal data. Our results show a statistically significant performance improvement from the incorporation of unimodal text data. Furthermore, we show that the training set of labelled memes can be reduced by 40% without degrading the performance of the downstream model.
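The abstract describes unimodal intermediate training only at a high level. The sketch below is one illustrative reading of that idea, not the paper's reported method: the two-encoder late-fusion classifier, feature dimensions, synthetic tensors, and the train_step helper are all assumptions introduced here for illustration. Unimodal sentiment-labelled text and images first train their respective encoders through temporary unimodal heads; the fused classifier is then fine-tuned on the smaller set of labelled memes.

import torch
import torch.nn as nn

class MemeSentimentClassifier(nn.Module):
    """Hypothetical two-encoder fusion model, used only to illustrate the idea."""
    def __init__(self, text_dim=768, image_dim=2048, hidden=256, n_classes=3):
        super().__init__()
        # Unimodal encoders (stand-ins for pretrained text/image backbones).
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        # Heads used only during the unimodal intermediate-training stage.
        self.text_head = nn.Linear(hidden, n_classes)
        self.image_head = nn.Linear(hidden, n_classes)
        # Fusion head used for the downstream multimodal (meme) task.
        self.fusion_head = nn.Linear(2 * hidden, n_classes)

    def forward_text(self, text_feats):
        return self.text_head(self.text_encoder(text_feats))

    def forward_image(self, image_feats):
        return self.image_head(self.image_encoder(image_feats))

    def forward(self, text_feats, image_feats):
        fused = torch.cat([self.text_encoder(text_feats),
                           self.image_encoder(image_feats)], dim=-1)
        return self.fusion_head(fused)

def train_step(optimiser, loss_fn, logits, labels):
    # One supervised update; shared by both training stages.
    loss = loss_fn(logits, labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

model = MemeSentimentClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stage 1: intermediate training on relatively abundant sentiment-labelled
# unimodal data (random tensors stand in for real features and labels).
text_x, text_y = torch.randn(32, 768), torch.randint(0, 3, (32,))
img_x, img_y = torch.randn(32, 2048), torch.randint(0, 3, (32,))
train_step(optimiser, loss_fn, model.forward_text(text_x), text_y)
train_step(optimiser, loss_fn, model.forward_image(img_x), img_y)

# Stage 2: fine-tune the fused multimodal classifier on the smaller meme set.
meme_text = torch.randn(32, 768)
meme_img = torch.randn(32, 2048)
meme_y = torch.randint(0, 3, (32,))
train_step(optimiser, loss_fn, model(meme_text, meme_img), meme_y)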
Anthology ID: 2023.ranlp-1.55
Volume: Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Month: September
Year: 2023
Address: Varna, Bulgaria
Editors: Ruslan Mitkov, Galia Angelova
Venue: RANLP
Publisher: INCOMA Ltd., Shoumen, Bulgaria
Pages: 494–506
URL: https://aclanthology.org/2023.ranlp-1.55
Cite (ACL): Muzhaffar Hazman, Susan McKeever, and Josephine Griffith. 2023. Unimodal Intermediate Training for Multimodal Meme Sentiment Classification. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 494–506, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal): Unimodal Intermediate Training for Multimodal Meme Sentiment Classification (Hazman et al., RANLP 2023)
PDF: https://aclanthology.org/2023.ranlp-1.55.pdf