ARIES: A General Benchmark for Argument Relation Identification

Debela Gemechu, Ramon Ruiz-Dolz, Chris Reed


Abstract
Measuring advances in argument mining is one of the main challenges in the area. Different theories of argument, heterogeneous annotations, and a varied set of argumentation domains make it difficult to contextualise and understand the results reported in different works from a general perspective. In this paper, we present ARIES, a general benchmark for Argument Relation Identification aimed at providing a standard evaluation for argument mining research. ARIES covers three different language modelling approaches: sequence modelling, token modelling, and sequence-to-sequence alignment, together with the three main Transformer-based model architectures: encoder-only, decoder-only, and encoder-decoder. Furthermore, the benchmark consists of eight different argument mining datasets, covering the most common argumentation domains and standardised with the same annotation structures. This paper provides a first comprehensive and comparative set of results in argument mining across a broad range of configurations, both advancing the state of the art and establishing a standard way to measure future advances in the area. Across varied task setups and architectures, our experiments reveal consistent challenges in cross-dataset evaluation, with notably poor results. Given the models' struggle to acquire transferable skills, the task remains challenging, opening avenues for future research.
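For illustration, the sequence-modelling setup mentioned in the abstract can be instantiated as sequence-pair classification with an encoder-only Transformer. The following is a minimal sketch and not code from the paper: the model checkpoint, relation label set, and example argument pair are assumptions chosen purely for demonstration.

```python
# Minimal sketch (assumption, not the paper's code): argument relation
# identification framed as sequence-pair classification with an
# encoder-only Transformer. Checkpoint, labels, and the example pair
# below are illustrative choices.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["support", "attack", "no-relation"]  # assumed relation inventory

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)

# The two argument units are encoded jointly as one sequence pair.
head = "We should invest in renewable energy."
tail = "Solar power has become cheaper than fossil fuels."
inputs = tokenizer(head, tail, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# The classification head is randomly initialised here, so the prediction
# is arbitrary until the model is fine-tuned on annotated relation data.
print(LABELS[logits.argmax(dim=-1).item()])
```

In practice, such a model would be fine-tuned on one of the benchmark's standardised datasets before its predicted relation labels are meaningful.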
Anthology ID:
2024.argmining-1.1
Volume:
Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Yamen Ajjour, Roy Bar-Haim, Roxanne El Baff, Zhexiong Liu, Gabriella Skitalinskaya
Venue:
ArgMining
Publisher:
Association for Computational Linguistics
Pages:
1–14
URL:
https://aclanthology.org/2024.argmining-1.1
DOI:
10.18653/v1/2024.argmining-1.1
Cite (ACL):
Debela Gemechu, Ramon Ruiz-Dolz, and Chris Reed. 2024. ARIES: A General Benchmark for Argument Relation Identification. In Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024), pages 1–14, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
ARIES: A General Benchmark for Argument Relation Identification (Gemechu et al., ArgMining 2024)
PDF:
https://aclanthology.org/2024.argmining-1.1.pdf