Argumentative Stance Prediction: An Exploratory Study on Multimodality and Few-Shot Learning

Arushi Sharma, Abhibha Gupta, Maneesh Bilalpur


Abstract
To advance argumentative stance prediction as a multimodal problem, the First Shared Task in Multimodal Argument Mining hosted stance prediction on the crucial social topics of gun control and abortion. Our exploratory study evaluates the necessity of images for stance prediction in tweets and compares out-of-the-box text-based large language models (LLMs) in few-shot settings against fine-tuned unimodal and multimodal models. Our work suggests that an ensemble of fine-tuned text-based language models (0.817 F1-score) outperforms both the multimodal models (0.677 F1-score) and text-based few-shot prediction with a recent state-of-the-art LLM (0.550 F1-score). Beyond these performance differences, our findings suggest that multimodal models tend to perform better when image content is summarized as natural language rather than consumed in its native pixel form, and that providing in-context examples improves the few-shot performance of LLMs.
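To illustrate the in-context few-shot setting the abstract refers to, the sketch below shows one plausible way to assemble a few-shot stance-prediction prompt for tweets. This is a minimal, hypothetical example, not the authors' implementation: the function name, the example tweets, and the "support"/"oppose" label strings are illustrative assumptions, and the resulting prompt would be sent to an LLM of one's choice.

```python
# Minimal sketch (not the authors' code): building an in-context few-shot
# prompt for tweet stance prediction on a topic such as gun control.
# The example tweets and labels below are hypothetical placeholders.

FEW_SHOT_EXAMPLES = [
    ("Background checks save lives.", "support"),
    ("The Second Amendment is not negotiable.", "oppose"),
]

def build_stance_prompt(topic: str, tweet: str) -> str:
    """Assemble a prompt: task instruction, labeled examples, then the query tweet."""
    lines = [
        f"Classify the stance of each tweet toward the topic '{topic}' "
        "as 'support' or 'oppose'."
    ]
    for example_tweet, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Tweet: {example_tweet}\nStance: {label}")
    # Leave the final stance blank so the model completes it.
    lines.append(f"Tweet: {tweet}\nStance:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    prompt = build_stance_prompt("gun control",
                                 "Universal background checks are long overdue.")
    print(prompt)  # pass this prompt to an LLM and parse the completion as the label
```

Under this kind of setup, the zero-shot variant would simply omit FEW_SHOT_EXAMPLES; the abstract's finding is that including such labeled examples improves the LLM's few-shot predictions.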
Anthology ID: 2023.argmining-1.18
Volume: Proceedings of the 10th Workshop on Argument Mining
Month: December
Year: 2023
Address: Singapore
Editors: Milad Alshomary, Chung-Chi Chen, Smaranda Muresan, Joonsuk Park, Julia Romberg
Venues: ArgMining | WS
Publisher: Association for Computational Linguistics
Pages: 167–174
URL: https://aclanthology.org/2023.argmining-1.18
DOI: 10.18653/v1/2023.argmining-1.18
Cite (ACL): Arushi Sharma, Abhibha Gupta, and Maneesh Bilalpur. 2023. Argumentative Stance Prediction: An Exploratory Study on Multimodality and Few-Shot Learning. In Proceedings of the 10th Workshop on Argument Mining, pages 167–174, Singapore. Association for Computational Linguistics.
Cite (Informal): Argumentative Stance Prediction: An Exploratory Study on Multimodality and Few-Shot Learning (Sharma et al., ArgMining-WS 2023)
PDF: https://aclanthology.org/2023.argmining-1.18.pdf