Identifying beneficial task relations for multi-task learning in deep neural networks

Joachim Bingel, Anders Søgaard


Abstract
Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups.
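The MTL setups the paper studies are of the hard-parameter-sharing kind common in deep NLP models: tasks share lower layers and keep task-specific output heads. As a minimal illustration (not the paper's actual architecture; all dimensions and names here are hypothetical), a forward pass with one shared layer and two task heads can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (not from the paper).
d_in, d_hidden, d_task_a, d_task_b = 8, 16, 5, 3

# Hard parameter sharing: one hidden layer is shared across tasks,
# while each task has its own output head.
W_shared = rng.normal(scale=0.1, size=(d_in, d_hidden))
W_a = rng.normal(scale=0.1, size=(d_hidden, d_task_a))
W_b = rng.normal(scale=0.1, size=(d_hidden, d_task_b))

def forward(x, head):
    """Compute logits for one task via the shared layer and its head."""
    h = np.tanh(x @ W_shared)           # shared representation
    W_head = W_a if head == "a" else W_b
    return h @ W_head                   # task-specific logits

x = rng.normal(size=(4, d_in))          # a toy batch of 4 examples
print(forward(x, "a").shape)            # (4, 5)
print(forward(x, "b").shape)            # (4, 3)
```

During training, batches from the different tasks would update the shared layer jointly and each head separately; whether that sharing helps or hurts a given task pair is exactly the question the paper investigates.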
Anthology ID: E17-2026
Volume: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Month: April
Year: 2017
Address: Valencia, Spain
Editors: Mirella Lapata, Phil Blunsom, Alexander Koller
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 164–169
URL: https://aclanthology.org/E17-2026
Cite (ACL): Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164–169, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal): Identifying beneficial task relations for multi-task learning in deep neural networks (Bingel & Søgaard, EACL 2017)
PDF: https://aclanthology.org/E17-2026.pdf
Code: jbingel/eacl2017_mtl
Data: STREUSLE