Learning Structured Text Representations

Yang Liu, Mirella Lapata


Abstract
In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias (Cheng et al., 2016; Kim et al., 2017), we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures that are both interpretable and meaningful.
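The differentiable non-projective parsing the abstract refers to is realized with structured attention based on the Matrix-Tree theorem (Koo et al., 2007), which turns unnormalized arc scores into marginal probabilities of dependency arcs in a fully differentiable way. The sketch below is a minimal NumPy illustration of that construction under those assumptions, not the authors' released implementation (see nlpyang/structured for the official code); the function name `matrix_tree_marginals` and its interface are hypothetical.

```python
import numpy as np

def matrix_tree_marginals(arc_scores, root_scores):
    """Marginal arc probabilities over non-projective dependency trees
    via the Matrix-Tree theorem (Koo et al., 2007). Hypothetical helper,
    not part of nlpyang/structured.

    arc_scores:  (n, n) unnormalized scores; arc_scores[i, j] scores arc i -> j
    root_scores: (n,)   unnormalized scores for token j being the root
    Returns (arc_marg, root_marg): P(arc i -> j) and P(j is root).
    """
    n = arc_scores.shape[0]
    # Arc potentials with a zeroed diagonal (no self-loops); real
    # implementations subtract the max score first for numerical stability.
    A = np.exp(arc_scores) * (1.0 - np.eye(n))
    r = np.exp(root_scores)

    # Graph Laplacian: L[j, j] = column sum of A, L[i, j] = -A[i, j] otherwise.
    L = -A
    np.fill_diagonal(L, A.sum(axis=0))

    # Koo et al.'s variant: replace the first row with the root potentials,
    # so det(L_bar) equals the partition function over all trees.
    L_bar = L.copy()
    L_bar[0, :] = r
    L_inv = np.linalg.inv(L_bar)

    # Marginals follow from derivatives of log det(L_bar) w.r.t. the potentials.
    not_first = 1.0 - np.eye(n)[0]          # 0 at index 0, 1 elsewhere
    arc_marg = A * (not_first[None, :] * np.diag(L_inv)[None, :]
                    - not_first[:, None] * L_inv.T)
    root_marg = r * L_inv[:, 0]
    return arc_marg, root_marg
```

As a sanity check, every token receives exactly one head in any tree, so for each j the incoming-arc marginals plus the root marginal sum to one: `arc_marg[:, j].sum() + root_marg[j] ≈ 1`. These marginals serve as the attention weights that inject the structural bias into the document encoder.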
Anthology ID: Q18-1005
Volume: Transactions of the Association for Computational Linguistics, Volume 6
Year: 2018
Address: Cambridge, MA
Editors: Lillian Lee, Mark Johnson, Kristina Toutanova, Brian Roark
Venue: TACL
Publisher: MIT Press
Pages: 63–75
URL: https://aclanthology.org/Q18-1005
DOI: 10.1162/tacl_a_00005
Cite (ACL): Yang Liu and Mirella Lapata. 2018. Learning Structured Text Representations. Transactions of the Association for Computational Linguistics, 6:63–75.
Cite (Informal): Learning Structured Text Representations (Liu & Lapata, TACL 2018)
PDF: https://aclanthology.org/Q18-1005.pdf
Video: https://aclanthology.org/Q18-1005.mp4
Code: nlpyang/structured (plus additional community code)
Data: SNLI