Exploring Text Representations for Generative Temporal Relation Extraction

Dmitriy Dligach, Steven Bethard, Timothy Miller, Guergana Savova


Abstract
Sequence-to-sequence models are appealing because they allow both the encoder and the decoder to be shared across many tasks by formulating those tasks as text-to-text problems. Despite recently reported successes of such models, we find that engineering input/output representations for these text-to-text models is challenging. On the Clinical TempEval 2016 relation extraction task, the most natural choice of output representation, where relations are spelled out as simple predicate logic statements, did not lead to good performance. We explore a variety of input/output representations, with the most successful one prompting for one event at a time and achieving results competitive with standard pairwise temporal relation extraction systems.
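The abstract's framing can be illustrated with a minimal sketch: build an input that prompts for one event at a time and a target that spells the relations out as predicate logic statements. The template strings, function name, and relation labels here are illustrative assumptions, not the paper's actual representations.

```python
# Hypothetical sketch of a text-to-text framing for temporal relation
# extraction; the exact prompt and output templates are assumptions,
# not taken from the paper.
def build_example(text, event, relations):
    """Build one input/output pair, prompting for a single focus event."""
    # Input: the source sentence plus a prompt naming the focus event.
    source = f"{text} relations for event: {event}"
    # Output: relations rendered as simple predicate-logic statements.
    target = "; ".join(f"{rel}({head}, {tail})" for rel, head, tail in relations)
    return source, target

src, tgt = build_example(
    "The patient was admitted before the surgery.",
    "admitted",
    [("BEFORE", "admitted", "surgery")],
)
# tgt is "BEFORE(admitted, surgery)"
```

A pairwise system would instead classify each event pair independently; the generative framing lets a single decoder emit all relations for the prompted event in one pass.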
Anthology ID:
2022.clinicalnlp-1.12
Volume:
Proceedings of the 4th Clinical Natural Language Processing Workshop
Month:
July
Year:
2022
Address:
Seattle, WA
Editors:
Tristan Naumann, Steven Bethard, Kirk Roberts, Anna Rumshisky
Venue:
ClinicalNLP
Publisher:
Association for Computational Linguistics
Pages:
109–113
URL:
https://aclanthology.org/2022.clinicalnlp-1.12
DOI:
10.18653/v1/2022.clinicalnlp-1.12
Cite (ACL):
Dmitriy Dligach, Steven Bethard, Timothy Miller, and Guergana Savova. 2022. Exploring Text Representations for Generative Temporal Relation Extraction. In Proceedings of the 4th Clinical Natural Language Processing Workshop, pages 109–113, Seattle, WA. Association for Computational Linguistics.
Cite (Informal):
Exploring Text Representations for Generative Temporal Relation Extraction (Dligach et al., ClinicalNLP 2022)
PDF:
https://aclanthology.org/2022.clinicalnlp-1.12.pdf
Video:
https://aclanthology.org/2022.clinicalnlp-1.12.mp4