Explainable Unsupervised Argument Similarity Rating with Abstract Meaning Representation and Conclusion Generation

Juri Opitz, Philipp Heinisch, Philipp Wiesenbach, Philipp Cimiano, Anette Frank


Abstract
When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings. Hence, the features that determine argument similarity remain elusive. We address this issue by introducing novel argument similarity metrics that aim at high performance and explainability. We show that Abstract Meaning Representation (AMR) graphs can be useful for representing arguments, and that novel AMR graph metrics can offer explanations for argument similarity ratings. We start from the hypothesis that similar premises often lead to similar conclusions—and extend an approach for AMR-based argument similarity rating by estimating, in addition, the similarity of conclusions that we automatically infer from the arguments used as premises. We show that AMR similarity metrics make argument similarity judgements more interpretable and may even support argument quality judgements. Our approach provides significant performance improvements over strong baselines in a fully unsupervised setting. Finally, we take first steps toward addressing the problem of reference-less evaluation of automatically generated argumentative conclusions.
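The core idea sketched in the abstract—rating two arguments by combining the AMR similarity of their premises with the AMR similarity of conclusions inferred from them—can be illustrated with a minimal example. The snippet below is a hedged sketch, not the paper's implementation: `triple_f1` is a simplified, alignment-free stand-in for an AMR graph metric (real Smatch searches over variable alignments), and the interpolation weight `alpha` is illustrative, not a value from the paper.

```python
# Hedged sketch: combine premise-level and conclusion-level AMR similarity.
# triple_f1 is a simplification of Smatch that ignores variable alignment.

def triple_f1(triples_a, triples_b):
    """F1 overlap between two sets of AMR (source, relation, target) triples."""
    a, b = set(triples_a), set(triples_b)
    if not a or not b:
        return 0.0
    overlap = len(a & b)
    precision = overlap / len(b)
    recall = overlap / len(a)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def argument_similarity(prem_a, prem_b, concl_a, concl_b, alpha=0.5):
    """Interpolate premise- and (generated-)conclusion-level similarity."""
    return alpha * triple_f1(prem_a, prem_b) + (1 - alpha) * triple_f1(concl_a, concl_b)

# Toy AMR triples for two arguments about smoking (hypothetical example).
p1 = {("s", "instance", "smoke-01"), ("s", "ARG0", "person"), ("h", "instance", "harm-01")}
p2 = {("s", "instance", "smoke-01"), ("s", "ARG0", "person"), ("d", "instance", "danger")}
c1 = {("b", "instance", "ban-01"), ("b", "ARG1", "smoke-01")}
c2 = {("b", "instance", "ban-01"), ("b", "ARG1", "smoke-01")}

print(round(argument_similarity(p1, p2, c1, c2), 3))  # → 0.833
```

Because the score decomposes into named triple overlaps, one can point to the shared and divergent triples (e.g. both arguments invoke `smoke-01`) as evidence for the rating—this decomposability is what makes graph-based metrics explainable.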
Anthology ID:
2021.argmining-1.3
Volume:
Proceedings of the 8th Workshop on Argument Mining
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venues:
ArgMining | EMNLP
Publisher:
Association for Computational Linguistics
Pages:
24–35
URL:
https://aclanthology.org/2021.argmining-1.3
PDF:
https://aclanthology.org/2021.argmining-1.3.pdf
Code:
heidelberg-nlp/amr-argument-sim
Data:
ConceptNet