Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition

Tariq Alhindi, Smaranda Muresan, Preslav Nakov


Abstract
Recognizing fallacies is crucial for ensuring the quality and validity of arguments across various domains. However, computational fallacy recognition faces challenges due to the diverse genres, domains, and types of fallacies found in datasets. This leads to a highly multi-class, and even multi-label, setup with substantial class imbalance. In this study, we aim to enhance existing models for fallacy recognition by incorporating additional context and by leveraging large language models to generate synthetic data, thus increasing the representation of the infrequent classes. We experiment with GPT-3.5 to generate synthetic examples and examine how different prompt settings affect this generation process. Moreover, we explore zero-shot and few-shot scenarios to evaluate the effectiveness of using the generated examples for training smaller models within a unified fallacy recognition framework. Furthermore, we analyze the overlap between the synthetic data and existing fallacy datasets. Finally, we investigate the usefulness of providing supplementary context for detecting fallacy types that need such context, e.g., diversion fallacies. Our evaluation results demonstrate consistent improvements across fallacy types, datasets, and generators. The code and the synthetic datasets are publicly available.
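To make the few-shot generation setup concrete, here is a minimal sketch of how one might prompt GPT-3.5 to synthesize a training example for an infrequent fallacy class. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the label name, seed demonstrations, and prompt wording are illustrative placeholders, not the authors' actual prompts or data.

```python
# Sketch: few-shot synthetic-example generation for an underrepresented
# fallacy class. The fallacy label, seed examples, and prompt text are
# hypothetical illustrations, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FALLACY = "red herring"  # hypothetical infrequent class
SEED_EXAMPLES = [        # hypothetical few-shot demonstrations
    "We shouldn't debate the budget deficit; look how beautiful our parks are.",
    "Why worry about the exam results? The cafeteria food has really improved.",
]

def generate_synthetic_example(fallacy: str, seeds: list[str]) -> str:
    """Ask the model for one new argument committing the given fallacy."""
    shots = "\n".join(f"- {s}" for s in seeds)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.9,  # higher temperature encourages diverse examples
        messages=[
            {"role": "system",
             "content": "You write short arguments that commit a named logical fallacy."},
            {"role": "user",
             "content": f"Fallacy: {fallacy}\nExamples:\n{shots}\n"
                        f"Write one new, distinct example of this fallacy."},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(generate_synthetic_example(FALLACY, SEED_EXAMPLES))
```

In a setup like the paper's, generated examples for the rare classes would then be added to the training data of a smaller classifier to mitigate class imbalance.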
Anthology ID: 2024.findings-acl.732
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 12323–12334
URL: https://aclanthology.org/2024.findings-acl.732
Cite (ACL): Tariq Alhindi, Smaranda Muresan, and Preslav Nakov. 2024. Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition. In Findings of the Association for Computational Linguistics ACL 2024, pages 12323–12334, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition (Alhindi et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.732.pdf