Detecting Argumentative Fallacies in the Wild: Problems and Limitations of Large Language Models

Ramon Ruiz-Dolz, John Lawrence


Abstract
Previous work on the automatic identification of fallacies in natural language text has typically approached the problem in constrained experimental setups that make it difficult to understand the applicability and usefulness of the proposals in the real world. In this paper, we present the first analysis of the limitations that these data-driven approaches may exhibit in real situations. For that purpose, we first create a validation corpus consisting of natural language argumentation schemes. Second, we provide new empirical results for the emerging task of identifying fallacies in natural language text. Third, we analyse the errors observed outside the test data domains using the new validation corpus. Finally, we point out some important limitations observed in our analysis that should be taken into account in future research on this topic, particularly if these systems are to be deployed in the wild.
Anthology ID: 2023.argmining-1.1
Volume: Proceedings of the 10th Workshop on Argument Mining
Month: December
Year: 2023
Address: Singapore
Editors: Milad Alshomary, Chung-Chi Chen, Smaranda Muresan, Joonsuk Park, Julia Romberg
Venues: ArgMining | WS
Publisher: Association for Computational Linguistics
Pages: 1–10
URL: https://aclanthology.org/2023.argmining-1.1
DOI: 10.18653/v1/2023.argmining-1.1
Cite (ACL): Ramon Ruiz-Dolz and John Lawrence. 2023. Detecting Argumentative Fallacies in the Wild: Problems and Limitations of Large Language Models. In Proceedings of the 10th Workshop on Argument Mining, pages 1–10, Singapore. Association for Computational Linguistics.
Cite (Informal): Detecting Argumentative Fallacies in the Wild: Problems and Limitations of Large Language Models (Ruiz-Dolz & Lawrence, ArgMining-WS 2023)
PDF: https://aclanthology.org/2023.argmining-1.1.pdf