Assessing the Sufficiency of Arguments through Conclusion Generation

Timon Gurcke, Milad Alshomary, Henning Wachsmuth


Abstract
The premises of an argument give evidence or other reasons to support a conclusion. However, the amount of support required depends on the generality of the conclusion, the nature of the individual premises, and similar factors. An argument whose premises make its conclusion rationally worthy of being drawn is called sufficient in argument quality research. Previous work tackled sufficiency assessment as a standard text classification problem, without modeling the inherent relation between premises and conclusion. In this paper, we hypothesize that the conclusion of a sufficient argument can be generated from its premises. To study this hypothesis, we explore the potential of assessing sufficiency based on the output of large-scale pre-trained language models. Our best model variant achieves an F1-score of .885, outperforming the previous state of the art and performing on par with human experts. While manual evaluation reveals the quality of the generated conclusions, their impact ultimately remains low.
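The hypothesized pipeline, generating a conclusion from the premises and then judging sufficiency by how well it matches the stated conclusion, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the generation step is stubbed out (the paper fine-tunes large pre-trained language models for it), and the Jaccard token overlap with a fixed threshold is a crude stand-in for the paper's learned classifier, not its actual method.

```python
def token_overlap(generated: str, stated: str) -> float:
    """Jaccard overlap between token sets: a crude, illustrative proxy
    for how well a generated conclusion matches the stated one."""
    a, b = set(generated.lower().split()), set(stated.lower().split())
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_sufficient(premises: list[str], stated_conclusion: str,
                  generate, threshold: float = 0.5) -> bool:
    """Sketch of the hypothesis: generate a conclusion from the premises,
    then deem the argument sufficient if generated and stated conclusions
    agree closely enough. `generate` is any text-to-text function; the
    paper uses pre-trained language models and a learned decision instead
    of this fixed threshold."""
    generated = generate(" ".join(premises))
    return token_overlap(generated, stated_conclusion) >= threshold

# Illustrative use with a trivial "generator" that just echoes its input:
echo = lambda text: text
print(is_sufficient(["Smoking harms health"], "Smoking harms health", echo))
# → True
```

The point of the sketch is only the shape of the pipeline: premises go through a generation step, and sufficiency is a function of the agreement between the generated and the stated conclusion.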
Anthology ID:
2021.argmining-1.7
Volume:
Proceedings of the 8th Workshop on Argument Mining
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venues:
ArgMining | EMNLP
Publisher:
Association for Computational Linguistics
Pages:
67–77
URL:
https://aclanthology.org/2021.argmining-1.7
DOI:
10.18653/v1/2021.argmining-1.7
Bibkey:
Cite (ACL):
Timon Gurcke, Milad Alshomary, and Henning Wachsmuth. 2021. Assessing the Sufficiency of Arguments through Conclusion Generation. In Proceedings of the 8th Workshop on Argument Mining, pages 67–77, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Assessing the Sufficiency of Arguments through Conclusion Generation (Gurcke et al., ArgMining 2021)
PDF:
https://aclanthology.org/2021.argmining-1.7.pdf
Code
webis-de/argmining-21