Rositsa V Ivanova


2025

The Shift from Logic to Dialectic in Argumentation Theory: Implications for Computational Argument Quality Assessment
Rositsa V Ivanova | Reto Gubelmann
Proceedings of the 31st International Conference on Computational Linguistics

In the field of computational argument quality assessment, logic and dialectic are essential dimensions used to measure the quality of argumentative texts. Both have found their way into the field owing to their importance in argumentation theory. We trace the development of the core logical concepts of validity and soundness from their first use in argumentation theory to their understanding in state-of-the-art research. We show how, in the course of this development, dialectical considerations have taken center stage at the expense of the logical perspective. We then take a closer look at the quality dimensions used in computational argument quality assessment. Based on an analysis of prior empirical work in this field, we show how methodological considerations from argumentation theory can benefit state-of-the-art methods in computational argument quality assessment. We propose an even clearer separation between the two quality dimensions, not only with regard to their definitions but also to the granularity at which the argumentative text is annotated and assessed.

2024

Let’s discuss! Quality Dimensions and Annotated Datasets for Computational Argument Quality Assessment
Rositsa V Ivanova | Thomas Huber | Christina Niklaus
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Research in the computational assessment of argumentation quality has gained popularity over the last ten years. Various quality dimensions have been explored through the creation of domain-specific datasets and assessment methods. We survey the related literature (211 publications and 32 datasets) while addressing potential overlaps and blurry boundaries with related domains. This paper provides a representative overview of the state of the art in Computational Argument Quality Assessment, with a focus on quality dimensions and annotated datasets. The aim of the survey is to identify research gaps and to aid future discussion and work in the domain.