Can Language Models Recognize Convincing Arguments?

Paula Rescala, Manoel Ribeiro, Tiancheng Hu, Robert West


Abstract
The capabilities of large language models (LLMs) have raised concerns about their potential to create and propagate convincing narratives. Here, we study their performance in detecting convincing arguments to gain insights into LLMs’ persuasive capabilities without directly engaging in experimentation with humans. We extend a dataset by Durmus and Cardie (2018) with debates, votes, and user traits and propose tasks measuring LLMs’ ability to (1) distinguish between strong and weak arguments, (2) predict stances based on beliefs and demographic characteristics, and (3) determine the appeal of an argument to an individual based on their traits. We show that LLMs perform on par with humans in these tasks and that combining predictions from different LLMs yields significant performance gains, surpassing human performance. The data and code released with this paper contribute to the crucial effort of continuously evaluating and monitoring LLMs’ capabilities and potential impact. (https://go.epfl.ch/persuasion-llm)
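The abstract reports that combining predictions from different LLMs yields gains beyond any single model. The paper's exact aggregation method is not specified on this page; as a minimal illustration of one common way to combine such predictions, the Python sketch below does a simple majority vote over per-model labels for the "which of two arguments is more convincing?" task. All names here (`majority_vote`, `model_outputs`) are hypothetical, not from the paper.

```python
from collections import Counter

def majority_vote(predictions: list[str]) -> str:
    """Combine per-model labels by simple majority vote.

    `predictions` holds one label per model, e.g. "A" or "B" for
    "which of the two debate arguments is more convincing?".
    Ties go to whichever label was seen first.
    """
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model predictions for one argument pair.
# In practice, each label would come from prompting a different LLM.
model_outputs = {
    "model_1": "A",
    "model_2": "B",
    "model_3": "A",
}

ensemble_label = majority_vote(list(model_outputs.values()))
print(ensemble_label)  # -> "A"
```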
Anthology ID: 2024.findings-emnlp.515
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8826–8837
URL: https://aclanthology.org/2024.findings-emnlp.515
Cite (ACL): Paula Rescala, Manoel Ribeiro, Tiancheng Hu, and Robert West. 2024. Can Language Models Recognize Convincing Arguments?. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8826–8837, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Can Language Models Recognize Convincing Arguments? (Rescala et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.515.pdf