MultiClimate: Multimodal Stance Detection on Climate Change Videos

Jiawen Wang, Longfei Zuo, Siyao Peng, Barbara Plank


Abstract
Climate change (CC) has attracted increasing attention in NLP in recent years. However, detecting the stance on CC in multimodal data is understudied and remains challenging due to a lack of reliable datasets. To improve the understanding of public opinions and communication strategies, this paper presents MultiClimate, the first open-source manually-annotated stance detection dataset with 100 CC-related YouTube videos and 4,209 frame-transcript pairs. We deploy state-of-the-art vision and language models, as well as multimodal models, for MultiClimate stance detection. Results show that text-only BERT significantly outperforms image-only ResNet50 and ViT. Combining both modalities achieves state-of-the-art performance, 0.747/0.749 in accuracy/F1. Our 100M-sized fusion models also beat CLIP and BLIP, as well as the much larger 9B-sized multimodal IDEFICS and text-only Llama3 and Gemma2, indicating that multimodal stance detection remains challenging for large language models. Our code, dataset, and supplementary materials are available at https://github.com/werywjw/MultiClimate.
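The fusion setup described in the abstract pairs a text encoder with an image encoder over frame-transcript pairs. As an illustration only, a minimal late-fusion sketch of this idea follows; the specific checkpoints, the pooled-[CLS] concatenation, and the three-way stance label set are assumptions for illustration, not details taken from the paper or its released code.

# Hypothetical sketch (not the authors' implementation): concatenate BERT text
# features and ViT image features, then classify the stance of a frame-transcript pair.
import torch
import torch.nn as nn
from transformers import BertModel, ViTModel

class FusionStanceClassifier(nn.Module):
    def __init__(self, num_labels=3):  # assumed three stance labels, e.g. support/neutral/oppose
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.image_encoder = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
        hidden = self.text_encoder.config.hidden_size + self.image_encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask, pixel_values):
        # Pooled [CLS] representation of the transcript sentence
        text_feat = self.text_encoder(input_ids=input_ids,
                                      attention_mask=attention_mask).pooler_output
        # Pooled [CLS] representation of the video frame
        image_feat = self.image_encoder(pixel_values=pixel_values).pooler_output
        # Late fusion by concatenation, followed by a linear stance classifier
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.classifier(fused)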
Anthology ID:
2024.nlp4pi-1.27
Volume:
Proceedings of the Third Workshop on NLP for Positive Impact
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Daryna Dementieva, Oana Ignat, Zhijing Jin, Rada Mihalcea, Giorgio Piatti, Joel Tetreault, Steven Wilson, Jieyu Zhao
Venue:
NLP4PI
Publisher:
Association for Computational Linguistics
Pages:
315–326
URL:
https://aclanthology.org/2024.nlp4pi-1.27
Cite (ACL):
Jiawen Wang, Longfei Zuo, Siyao Peng, and Barbara Plank. 2024. MultiClimate: Multimodal Stance Detection on Climate Change Videos. In Proceedings of the Third Workshop on NLP for Positive Impact, pages 315–326, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
MultiClimate: Multimodal Stance Detection on Climate Change Videos (Wang et al., NLP4PI 2024)
PDF:
https://aclanthology.org/2024.nlp4pi-1.27.pdf