Injecting Relational Structural Representation in Neural Networks for Question Similarity

Antonio Uva, Daniele Bonadiman, Alessandro Moschitti


Abstract
Effectively using full syntactic parsing information in Neural Networks (NNs) for solving relational tasks, e.g., question similarity, is still an open problem. In this paper, we propose to inject structural representations into NNs by (i) learning a model with Tree Kernels (TKs) on relatively few pairs of questions (a few thousand), as gold-standard (GS) training data is typically scarce, (ii) predicting labels on a very large corpus of question pairs, and (iii) pre-training NNs on this large corpus. The results on the Quora and SemEval question similarity datasets show that NNs using our approach learn more accurate models, especially after fine-tuning on the GS data.
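
The abstract describes a three-step weak-supervision pipeline: train a structural classifier on the small GS set, use it to label a large unlabeled corpus, and pre-train a NN on those weak labels before fine-tuning on the GS data. The sketch below illustrates only that scheme and is not the authors' implementation (their code is linked under Code below): the example questions are made up, a linear SVM over TF-IDF features of the concatenated pair stands in for the paper's SVM with Tree Kernels over parse trees, and a small scikit-learn MLP stands in for their NN.

# Minimal sketch of the three-step weak-supervision scheme, NOT the
# authors' implementation. Placeholders: TF-IDF pair encoding instead of
# Tree Kernels over parse trees, LinearSVC instead of SVM-with-TK, and an
# MLPClassifier instead of the paper's NN. All data below is hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier

# Toy stand-ins for the datasets (hypothetical).
gs_pairs = [("how do i learn python", "best way to study python"),
            ("what is the capital of italy", "how old is the earth")]
gs_labels = np.array([1, 0])                      # small gold-standard (GS) set
large_pairs = [("is java hard to learn", "how difficult is java"),
               ("who wrote hamlet", "how do i bake bread")]  # unlabeled corpus

def encode(pairs, vectorizer, fit=False):
    # Represent a question pair as TF-IDF of "q1 [SEP] q2" (placeholder encoding).
    texts = [q1 + " [SEP] " + q2 for q1, q2 in pairs]
    return vectorizer.fit_transform(texts) if fit else vectorizer.transform(texts)

vec = TfidfVectorizer()
X_gs = encode(gs_pairs, vec, fit=True)
X_large = encode(large_pairs, vec)

# (i) Train a classifier on the few GS pairs
#     (the paper uses an SVM with Tree Kernels here).
weak_annotator = LinearSVC().fit(X_gs, gs_labels)

# (ii) Predict weak labels for the large corpus of question pairs.
weak_labels = weak_annotator.predict(X_large)

# (iii) Pre-train the neural model on the weakly labeled corpus ...
nn = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
nn.partial_fit(X_large, weak_labels, classes=np.array([0, 1]))
for _ in range(20):                               # a few weak-label passes
    nn.partial_fit(X_large, weak_labels)

# ... then fine-tune it on the gold-standard data.
for _ in range(20):
    nn.partial_fit(X_gs, gs_labels)

print(nn.predict(X_gs))

The incremental partial_fit calls mimic "pre-train, then fine-tune": the weights learned from the weakly labeled corpus are the starting point for the final passes over the GS data.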
Anthology ID:
P18-2046
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
285–291
URL:
https://aclanthology.org/P18-2046
DOI:
10.18653/v1/P18-2046
Cite (ACL):
Antonio Uva, Daniele Bonadiman, and Alessandro Moschitti. 2018. Injecting Relational Structural Representation in Neural Networks for Question Similarity. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 285–291, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Injecting Relational Structural Representation in Neural Networks for Question Similarity (Uva et al., ACL 2018)
PDF:
https://aclanthology.org/P18-2046.pdf
Poster:
P18-2046.Poster.pdf
Code:
aseveryn/deep-qa