Do machines dream of artificial agreement?

Anna Lindahl


Abstract
In this paper, the (assumed) inconsistency between F1-scores and annotator agreement measures is discussed and exemplified in five corpora from the field of argumentation mining. High agreement is important in most annotation tasks, and it is often also deemed important for an annotated dataset to be useful for machine learning. However, depending on the annotation task, achieving high agreement is not always easy. This is especially true in the field of argumentation mining, because argumentation can be complex as well as implicit. There are also many different models of argumentation, which can be seen in the increasing number of argumentation-annotated corpora. Many of these reach only moderate agreement but are still used in machine learning tasks, where models trained on them reach high F1-scores. We describe five such corpora, in particular how they were created and used, to see how they have handled disagreement. We find that agreement can be raised post-production, but that more discussion of how agreement is evaluated and calculated is needed. We conclude that standardisation of the models and the evaluation methods could help such discussions.
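As a minimal illustration of the contrast the abstract raises (not taken from the paper itself), the toy example below computes both a chance-corrected agreement measure (Cohen's kappa) and F1-scores for the same pair of hypothetical annotator label sequences. Because F1 does not correct for agreement expected by chance, the two figures are not directly comparable, and a high F1 can coexist with only moderate agreement.

```python
# Toy example (invented labels, not data from the paper): the same pair of
# annotations can look strong under F1 but only moderate under Cohen's kappa.
from sklearn.metrics import cohen_kappa_score, f1_score

# Hypothetical sentence-level labels from two annotators
# (1 = argumentative, 0 = non-argumentative); the class imbalance is
# typical of argumentation mining data.
annotator_a = [1, 1, 1, 1] + [0] * 16
annotator_b = [1, 1, 0, 0] + [0] * 16

# Treating annotator A as "gold" and B as "predictions", as is sometimes
# done when agreement is reported as an F1-score:
print("micro-F1:", f1_score(annotator_a, annotator_b, average="micro"))  # 0.90
print("F1 (argument class):", f1_score(annotator_a, annotator_b))        # ~0.67
print("Cohen's kappa:", cohen_kappa_score(annotator_a, annotator_b))     # ~0.62
```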
Anthology ID:
2022.isa-1.9
Volume:
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
Month:
June
Year:
2022
Address:
Marseille, France
Editor:
Harry Bunt
Venue:
ISA
Publisher:
European Language Resources Association
Pages:
71–75
URL:
https://aclanthology.org/2022.isa-1.9
Cite (ACL):
Anna Lindahl. 2022. Do machines dream of artificial agreement?. In Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022, pages 71–75, Marseille, France. European Language Resources Association.
Cite (Informal):
Do machines dream of artificial agreement? (Lindahl, ISA 2022)
PDF:
https://aclanthology.org/2022.isa-1.9.pdf