Latent Tree Learning with Ordered Neurons: What Parses Does It Produce?

Yian Zhang


Abstract
Recent latent tree learning models can learn constituency parsing without any exposure to human-annotated tree structures. One such model is ON-LSTM (Shen et al., 2019), which is trained on language modelling and achieves near-state-of-the-art performance on unsupervised parsing. To better understand the performance and consistency of the model, as well as how the parses it generates differ from gold-standard PTB parses, we replicate the model with several random restarts and examine the resulting parses. We find that (1) the model has reasonably consistent parsing behavior across restarts, (2) the model struggles with the internal structures of complex noun phrases, and (3) the model tends to overestimate the height of the split points right before verbs. We speculate that both problems could potentially be solved by adopting a training task other than unidirectional language modelling.
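For readers unfamiliar with how such parses are obtained: in the ON-LSTM setup of Shen et al. (2019), each position between adjacent words receives a syntactic-distance score (derived from the master forget gate activations), and an unlabeled binary tree is induced by greedy top-down splitting at the highest-scoring point. The Python sketch below illustrates only that splitting step; the function name, tokens, and scores are hypothetical and are not taken from the paper or its accompanying code.

def build_tree(words, distances):
    """Recursively split a sentence into an unlabeled binary tree.

    distances[i] scores the split point between words[i] and
    words[i + 1]; the sentence is split at the highest-scoring point,
    and each half is parsed the same way (greedy top-down splitting).
    """
    if len(words) == 1:
        return words[0]
    # Index of the highest split score; ties go to the earlier position.
    split = max(range(len(distances)), key=lambda i: distances[i])
    left = build_tree(words[:split + 1], distances[:split])
    right = build_tree(words[split + 1:], distances[split + 1:])
    return (left, right)

# Illustration with made-up scores: the highest score (0.9) sits right
# before the verb "saw", so the first split separates the subject NP
# from the verb phrase.
tree = build_tree(["the", "old", "man", "saw", "the", "boy"],
                  [0.2, 0.3, 0.9, 0.5, 0.4])
print(tree)  # ((('the', 'old'), 'man'), ('saw', ('the', 'boy')))

In this hypothetical example, inflating the score before "saw" even further would not change the tree, but systematically overestimating such pre-verb split points can pull material out of the verb phrase in longer sentences, which is the kind of error pattern the paper reports.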
Anthology ID:
2020.blackboxnlp-1.11
Volume:
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2020
Address:
Online
Editors:
Afra Alishahi, Yonatan Belinkov, Grzegorz Chrupała, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
119–125
URL:
https://aclanthology.org/2020.blackboxnlp-1.11
DOI:
10.18653/v1/2020.blackboxnlp-1.11
Cite (ACL):
Yian Zhang. 2020. Latent Tree Learning with Ordered Neurons: What Parses Does It Produce?. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 119–125, Online. Association for Computational Linguistics.
Cite (Informal):
Latent Tree Learning with Ordered Neurons: What Parses Does It Produce? (Zhang, BlackboxNLP 2020)
PDF:
https://aclanthology.org/2020.blackboxnlp-1.11.pdf
Code:
YianZhang/ONLSTM-analysis