What’s Hard in English RST Parsing? Predictive Models for Error Analysis

Yang Janet Liu, Tatsuya Aoyama, Amir Zeldes


Abstract
Despite recent advances in Natural Language Processing (NLP), hierarchical discourse parsing in the framework of Rhetorical Structure Theory remains challenging, and our understanding of the reasons for this is as yet limited. In this paper, we examine and model some of the factors associated with parsing difficulties in previous work: the existence of implicit discourse relations, challenges in identifying long-distance relations, out-of-vocabulary items, and more. In order to assess the relative importance of these variables, we also release two annotated English test sets with explicit correct and distracting discourse markers associated with gold standard RST relations. Our results show that as in shallow discourse parsing, the explicit/implicit distinction plays a role, but that long-distance dependencies are the main challenge, while lack of lexical overlap is less of a problem, at least for in-domain parsing. Our final model is able to predict where errors will occur with an accuracy of 76.3% for the bottom-up parser and 76.6% for the top-down parser.
Anthology ID: 2023.sigdial-1.3
Volume: Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month: September
Year: 2023
Address: Prague, Czechia
Editors: Svetlana Stoyanchev, Shafiq Joty, David Schlangen, Ondrej Dusek, Casey Kennington, Malihe Alikhani
Venue: SIGDIAL
SIG: SIGDIAL
Publisher: Association for Computational Linguistics
Pages: 31–42
URL: https://aclanthology.org/2023.sigdial-1.3
DOI: 10.18653/v1/2023.sigdial-1.3
Cite (ACL): Yang Janet Liu, Tatsuya Aoyama, and Amir Zeldes. 2023. What’s Hard in English RST Parsing? Predictive Models for Error Analysis. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 31–42, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal): What’s Hard in English RST Parsing? Predictive Models for Error Analysis (Liu et al., SIGDIAL 2023)
PDF: https://aclanthology.org/2023.sigdial-1.3.pdf