A Unified View of Evaluation Metrics for Structured Prediction

Yunmo Chen, William Gantt, Tongfei Chen, Aaron White, Benjamin Van Durme


Abstract
We present a conceptual framework that unifies a variety of evaluation metrics for different structured prediction tasks (e.g., event and relation extraction, syntactic and semantic parsing). Our framework requires representing the outputs of these tasks as objects of certain data types, and derives metrics by matching common substructures, possibly followed by normalization. We demonstrate how commonly used metrics for a number of tasks can be succinctly expressed within this framework, and show that new metrics can be naturally derived in a bottom-up way from an output structure. We release a library that implements this derivation, enabling the creation of new metrics. Finally, we consider how specific characteristics of tasks motivate metric design decisions, and suggest possible modifications to existing metrics in line with those motivations.
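To make the recipe in the abstract concrete, below is a minimal Python sketch of its three steps: represent a task's output as an object of a data type, score a pair of outputs by matching common substructures, and normalize the matched score into precision, recall, and F1. All names here (Event, match, f1) are hypothetical illustrations and do not reflect the authors' released library or its API.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

# Hypothetical typed representation of an event-extraction output:
# a trigger span plus a set of (role, argument span) pairs.
@dataclass(frozen=True)
class Event:
    trigger: Tuple[int, int]                              # (start, end) offsets
    arguments: FrozenSet[Tuple[str, Tuple[int, int]]]     # {(role, (start, end)), ...}

def match(pred: Event, gold: Event) -> float:
    """Score two events by matching common substructures:
    exact match on the trigger, overlap on the argument set."""
    trigger_score = 1.0 if pred.trigger == gold.trigger else 0.0
    common = len(pred.arguments & gold.arguments)
    total = max(len(pred.arguments), len(gold.arguments), 1)
    return 0.5 * trigger_score + 0.5 * common / total

def f1(preds: list, golds: list) -> float:
    """Greedily align predicted events to gold events, then normalize
    the total matched score into an F1 (a simple stand-in for the
    matching-plus-normalization derivation the paper describes)."""
    remaining = list(golds)
    matched = 0.0
    for p in preds:
        if not remaining:
            break
        best = max(remaining, key=lambda g: match(p, g))
        matched += match(p, best)
        remaining.remove(best)
    precision = matched / len(preds) if preds else 0.0
    recall = matched / len(golds) if golds else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Under this view, metrics for other tasks fall out of the same recipe by swapping in a different data type, substructure matcher, or normalization, which is the kind of bottom-up derivation the paper's library is designed to support.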
Anthology ID: 2023.emnlp-main.795
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 12868–12882
URL: https://aclanthology.org/2023.emnlp-main.795
DOI: 10.18653/v1/2023.emnlp-main.795
Cite (ACL): Yunmo Chen, William Gantt, Tongfei Chen, Aaron White, and Benjamin Van Durme. 2023. A Unified View of Evaluation Metrics for Structured Prediction. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12868–12882, Singapore. Association for Computational Linguistics.
Cite (Informal): A Unified View of Evaluation Metrics for Structured Prediction (Chen et al., EMNLP 2023)
PDF: https://aclanthology.org/2023.emnlp-main.795.pdf
Video: https://aclanthology.org/2023.emnlp-main.795.mp4