Beyond the Numbers: Transparency in Relation Extraction Benchmark Creation and Leaderboards

Varvara Arzt, Allan Hanbury


Abstract
This paper investigates transparency in the creation of benchmarks and the use of leaderboards for measuring progress in NLP, with a focus on the relation extraction (RE) task. Existing RE benchmarks often suffer from insufficient documentation, lacking crucial details such as data sources, inter-annotator agreement, the algorithms used to select instances for datasets, and information on potential biases like dataset imbalance. Progress in RE is frequently measured by leaderboards that rank systems using evaluation methods typically limited to aggregate metrics like F1-score. However, the absence of detailed performance analysis beyond these metrics can obscure the true generalisation capabilities of models. Our analysis reveals that widely used RE benchmarks, such as TACRED and NYT, tend to be highly imbalanced and to contain noisy labels. Moreover, the lack of class-based performance metrics means that reported results fail to accurately reflect model performance across datasets with a large number of relation types. These limitations should be carefully considered when reporting progress in RE. While our discussion centers on the transparency of RE benchmarks and leaderboards, the observations we discuss are broadly applicable to other NLP tasks. Rather than undermining the significance and value of existing RE benchmarks and the development of new models, this paper advocates for improved documentation and more rigorous evaluation to advance the field.
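As an illustration of the abstract's point about aggregate metrics, the following sketch (not taken from the paper; the relation labels and counts are hypothetical) shows how micro-F1 can look strong on an imbalanced dataset while a per-class breakdown reveals that a minority relation type is never predicted correctly:

```python
def per_class_f1(gold, pred, labels):
    """Compute per-class F1 from parallel lists of gold and predicted labels."""
    scores = {}
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[lab] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

# Hypothetical imbalanced RE test set: 90 majority-class vs 10 minority-class instances.
gold = ["per:title"] * 90 + ["org:founded_by"] * 10
# A degenerate model that always predicts the majority relation:
pred = ["per:title"] * 100

scores = per_class_f1(gold, pred, ["per:title", "org:founded_by"])
macro_f1 = sum(scores.values()) / len(scores)
# For single-label classification, micro-F1 equals accuracy.
micro_f1 = sum(1 for g, p in zip(gold, pred) if g == p) / len(gold)

print(f"micro-F1: {micro_f1:.3f}")   # high, driven by the majority class
print(f"macro-F1: {macro_f1:.3f}")   # much lower: minority class has F1 = 0
```

Here micro-F1 is 0.90 even though the model never recovers `org:founded_by`; macro-F1 (about 0.47) and the per-class scores expose the failure, which is why class-based reporting matters for datasets with many relation types.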
Anthology ID:
2024.genbench-1.8
Volume:
Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Dieuwke Hupkes, Verna Dankers, Khuyagbaatar Batsuren, Amirhossein Kazemnejad, Christos Christodoulopoulos, Mario Giulianelli, Ryan Cotterell
Venue:
GenBench
Publisher:
Association for Computational Linguistics
Pages:
120–130
URL:
https://aclanthology.org/2024.genbench-1.8
Cite (ACL):
Varvara Arzt and Allan Hanbury. 2024. Beyond the Numbers: Transparency in Relation Extraction Benchmark Creation and Leaderboards. In Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP, pages 120–130, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Beyond the Numbers: Transparency in Relation Extraction Benchmark Creation and Leaderboards (Arzt & Hanbury, GenBench 2024)
PDF:
https://aclanthology.org/2024.genbench-1.8.pdf