Tübingen at SemEval-2023 Task 4: What Can Stance Tell? A Computational Study on Detecting Human Values behind Arguments

Fidan Can


Abstract
This paper describes a system that predicts stance as an output, rather than taking it as an input, while identifying the 20 human values behind given arguments in the two datasets of SemEval-2023 Task 4. The rationale was to determine whether jointly predicting stance helps predict the human values. For this setup, predicting 21 labels in total, a pre-trained language model, RoBERTa-Large, was used. The system achieved an F$_1$-score of 0.50 on the main test set and 0.35 on the secondary test set; through further analysis, this paper aims to give insight into the problem of human value identification.
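The abstract describes the system only at a high level. As a rough illustration, the sketch below shows one way a RoBERTa-Large model can be configured for multi-label prediction over the 20 value categories plus stance (21 labels) with the Hugging Face transformers library; the input formatting, label encoding, and 0.5 decision threshold are assumptions for illustration, not the author's exact configuration.

    # Minimal sketch (not the author's exact setup): RoBERTa-Large configured
    # for multi-label prediction of 20 human values plus stance (21 labels).
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    NUM_LABELS = 21  # 20 value categories + 1 stance label (assumed encoding)

    tokenizer = AutoTokenizer.from_pretrained("roberta-large")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-large",
        num_labels=NUM_LABELS,
        problem_type="multi_label_classification",  # trains with BCE-with-logits loss
    )

    # One argument pair from the task data (hypothetical example text).
    premise = "We should subsidize public transport to reduce emissions."
    conclusion = "Public transport should be subsidized."
    inputs = tokenizer(premise, conclusion, truncation=True, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits   # shape: (1, 21)
    probs = torch.sigmoid(logits)         # independent per-label probabilities
    predictions = (probs > 0.5).int()     # 1 = label assigned

In this setup each label is scored independently, so the stance prediction acts as an auxiliary output alongside the value labels rather than as an input feature.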
Anthology ID:
2023.semeval-1.244
Volume:
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1763–1768
URL:
https://aclanthology.org/2023.semeval-1.244
DOI:
10.18653/v1/2023.semeval-1.244
Cite (ACL):
Fidan Can. 2023. Tübingen at SemEval-2023 Task 4: What Can Stance Tell? A Computational Study on Detecting Human Values behind Arguments. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 1763–1768, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Tübingen at SemEval-2023 Task 4: What Can Stance Tell? A Computational Study on Detecting Human Values behind Arguments (Can, SemEval 2023)
PDF:
https://aclanthology.org/2023.semeval-1.244.pdf