A Corpus and Evaluation for Predicting Semi-Structured Human Annotations

Andreas Marfurt, Ashley Thornton, David Sylvan, Lonneke van der Plas, James Henderson


Abstract
A wide variety of tasks have been framed as text-to-text tasks to allow processing by sequence-to-sequence models. We propose a new task of generating a semi-structured interpretation of a source document. The interpretation is semi-structured in that it contains mandatory and optional fields with free-text information. This structure is surfaced by human annotations, which we standardize and convert to text format. We then propose an evaluation technique, called equivalence classes evaluation, that is generally applicable to any such semi-structured annotation. The evaluation technique is efficient and scalable; it creates a large number of evaluation instances from a comparatively cheap clustering of the free-text information by domain experts. For our task, we release a dataset about the monetary policy of the Federal Reserve. On this corpus, our evaluation reveals larger differences between pretrained models than standard text generation metrics do.
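The core idea of the equivalence classes evaluation can be illustrated with a minimal sketch: domain experts cluster free-text field values into equivalence classes, and a predicted value counts as correct if it falls into the same class as the reference. All names and example data below are hypothetical, not taken from the released dataset.

```python
# Hypothetical expert-provided clustering: each class is a set of
# interchangeable phrasings for the same underlying field value.
equivalence_classes = [
    {"raise rates", "increase the federal funds rate", "rate hike"},
    {"hold rates steady", "keep rates unchanged"},
]

# Map each phrasing to its class index for O(1) lookup.
class_of = {
    phrase: idx
    for idx, cluster in enumerate(equivalence_classes)
    for phrase in cluster
}

def is_correct(prediction: str, reference: str) -> bool:
    """A prediction is correct iff it lies in the reference's equivalence class."""
    ref_class = class_of.get(reference)
    return ref_class is not None and class_of.get(prediction) == ref_class

def accuracy(predictions, references):
    """Fraction of predictions whose class matches the reference's class."""
    pairs = list(zip(predictions, references))
    return sum(is_correct(p, r) for p, r in pairs) / len(pairs)
```

Because one expert clustering covers every phrasing in a class, a single annotation pass yields many evaluation instances, which is what makes the scheme cheap to scale.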
Anthology ID:
2022.gem-1.22
Volume:
Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Antoine Bosselut, Khyathi Chandu, Kaustubh Dhole, Varun Gangal, Sebastian Gehrmann, Yacine Jernite, Jekaterina Novikova, Laura Perez-Beltrachini
Venue:
GEM
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
262–275
URL:
https://aclanthology.org/2022.gem-1.22
DOI:
10.18653/v1/2022.gem-1.22
Cite (ACL):
Andreas Marfurt, Ashley Thornton, David Sylvan, Lonneke van der Plas, and James Henderson. 2022. A Corpus and Evaluation for Predicting Semi-Structured Human Annotations. In Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 262–275, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
A Corpus and Evaluation for Predicting Semi-Structured Human Annotations (Marfurt et al., GEM 2022)
PDF:
https://aclanthology.org/2022.gem-1.22.pdf
Video:
https://aclanthology.org/2022.gem-1.22.mp4