Comparing Span Extraction Methods for Semantic Role Labeling

Zhisong Zhang, Emma Strubell, Eduard Hovy


Abstract
In this work, we empirically compare span extraction methods for the task of semantic role labeling (SRL). While recent progress incorporating pre-trained contextualized representations into neural encoders has greatly improved SRL F1 performance on popular benchmarks, the potential costs and benefits of structured decoding in these models have become less clear. With extensive experiments on PropBank SRL datasets, we find that more structured decoding methods outperform BIO-tagging when using static (word type) embeddings across all experimental settings. However, when used in conjunction with pre-trained contextualized word representations, the benefits are diminished. We also experiment in cross-genre and cross-lingual settings and find similar trends. We further perform speed comparisons and provide analysis on the accuracy-efficiency trade-offs among different decoding methods.
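As a point of reference for the BIO-tagging baseline discussed in the abstract, the sketch below shows how a BIO tag sequence is decoded into labeled argument spans. This is an illustrative implementation assumed from the standard BIO scheme, not code from the paper's repository; role labels like ARG0 follow PropBank conventions.

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (start, end, label) spans.

    `end` is exclusive. Ill-formed I- tags without a preceding
    matching B- are treated as starting a new span (a common
    lenient convention; the paper may handle them differently).
    """
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close any open span
                spans.append((start, i, label))
            start, label = i, tag[2:]      # open a new span
        elif tag.startswith("I-") and start is not None and tag[2:] == label:
            continue                        # extend the open span
        else:
            if start is not None:          # close the open span
                spans.append((start, i, label))
                start, label = None, None
            if tag.startswith("I-"):       # lenient: orphan I- opens a span
                start, label = i, tag[2:]
    if start is not None:                  # close a span ending at the sequence end
        spans.append((start, len(tags), label))
    return spans
```

For example, decoding `["B-ARG0", "I-ARG0", "O", "B-V", "B-ARG1", "I-ARG1"]` yields the spans `(0, 2, "ARG0")`, `(3, 4, "V")`, and `(4, 6, "ARG1")`.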
Anthology ID:
2021.spnlp-1.8
Volume:
Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021)
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP | spnlp
Publisher:
Association for Computational Linguistics
Pages:
67–77
URL:
https://aclanthology.org/2021.spnlp-1.8
DOI:
10.18653/v1/2021.spnlp-1.8
Cite (ACL):
Zhisong Zhang, Emma Strubell, and Eduard Hovy. 2021. Comparing Span Extraction Methods for Semantic Role Labeling. In Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021), pages 67–77, Online. Association for Computational Linguistics.
Cite (Informal):
Comparing Span Extraction Methods for Semantic Role Labeling (Zhang et al., spnlp 2021)
PDF:
https://aclanthology.org/2021.spnlp-1.8.pdf
Code:
zzsfornlp/zmsp