%0 Conference Proceedings
%T Why Can’t Discourse Parsing Generalize? A Thorough Investigation of the Impact of Data Diversity
%A Liu, Yang Janet
%A Zeldes, Amir
%Y Vlachos, Andreas
%Y Augenstein, Isabelle
%S Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
%D 2023
%8 May
%I Association for Computational Linguistics
%C Dubrovnik, Croatia
%F liu-zeldes-2023-cant
%X Recent advances in discourse parsing performance create the impression that, as in other NLP tasks, performance for high-resource languages such as English is finally becoming reliable. In this paper we demonstrate that this is not the case, and thoroughly investigate the impact of data diversity on RST parsing stability. We show that state-of-the-art architectures trained on the standard English newswire benchmark do not generalize well, even within the news domain. Using the two largest RST corpora of English with text from multiple genres, we quantify the impact of genre diversity in training data for achieving generalization to text types unseen during training. Our results show that a heterogeneous training regime is critical for stable and generalizable models, across parser architectures. We also provide error analyses of model outputs and out-of-domain performance. To our knowledge, this study is the first to fully evaluate cross-corpus RST parsing generalizability on complete trees, examine between-genre degradation within an RST corpus, and investigate the impact of genre diversity in training data composition.
%R 10.18653/v1/2023.eacl-main.227
%U https://aclanthology.org/2023.eacl-main.227
%U https://doi.org/10.18653/v1/2023.eacl-main.227
%P 3112-3130