What can we learn from Semantic Tagging?

Mostafa Abdou, Artur Kulmizev, Vinit Ravishankar, Lasha Abzianidze, Johan Bos


Abstract
We investigate the effects of multi-task learning using the recently introduced task of semantic tagging. We employ semantic tagging as an auxiliary task for three different NLP tasks: part-of-speech tagging, Universal Dependency parsing, and Natural Language Inference. We compare full neural network sharing, partial neural network sharing, and what we term the "learning what to share" setting, where negative transfer between tasks is less likely. Our findings show considerable improvements for all tasks, particularly in the "learning what to share" setting, which shows consistent gains across all tasks.
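
To make the sharing settings concrete, the sketch below illustrates the simplest of the three, full (hard) parameter sharing: one BiLSTM encoder is shared between a main task (e.g. POS tagging) and the auxiliary semantic-tagging task, while each task keeps its own classification head. This is only an illustrative PyTorch reconstruction under assumed names and hyperparameters (SharedEncoderTagger, tag-inventory sizes, etc.), not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): hard parameter sharing for
# multi-task learning. A main tagging task and the auxiliary semantic-tagging
# task share one BiLSTM encoder but keep separate output heads.
import torch
import torch.nn as nn


class SharedEncoderTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_main_tags, n_sem_tags):
        super().__init__()
        # Parameters below are shared by both tasks ("full sharing").
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Task-specific classification heads.
        self.main_head = nn.Linear(2 * hidden_dim, n_main_tags)  # e.g. POS tags
        self.aux_head = nn.Linear(2 * hidden_dim, n_sem_tags)    # semantic tags

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        head = self.main_head if task == "main" else self.aux_head
        return head(states)  # per-token tag logits


# Training alternates between batches of the two tasks, so the shared encoder
# receives gradients from both objectives. Sizes here are placeholders.
model = SharedEncoderTagger(vocab_size=10000, emb_dim=100, hidden_dim=128,
                            n_main_tags=17, n_sem_tags=73)
loss_fn = nn.CrossEntropyLoss(ignore_index=0)
tokens = torch.randint(1, 10000, (8, 20))   # dummy token ids
gold = torch.randint(1, 17, (8, 20))        # dummy main-task labels
loss = loss_fn(model(tokens, "main").transpose(1, 2), gold)
loss.backward()
```

Partial sharing would instead share only the lower encoder layer, and the "learning what to share" setting would add trainable gates that control how much of the auxiliary representation flows into the main task.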
Anthology ID: D18-1526
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month: October-November
Year: 2018
Address: Brussels, Belgium
Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 4881–4889
URL: https://aclanthology.org/D18-1526
DOI: 10.18653/v1/D18-1526
Cite (ACL): Mostafa Abdou, Artur Kulmizev, Vinit Ravishankar, Lasha Abzianidze, and Johan Bos. 2018. What can we learn from Semantic Tagging?. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4881–4889, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): What can we learn from Semantic Tagging? (Abdou et al., EMNLP 2018)
PDF: https://aclanthology.org/D18-1526.pdf
Attachment: D18-1526.Attachment.pdf
Data: SNLI