Joint models for NLP

Yue Zhang

Abstract
Joint models have received much research attention in NLP, allowing related tasks to share common information while avoiding error propagation in multi-stage pipelines. Several main approaches have been taken in statistical joint modeling, while neural models additionally allow parameter sharing and adversarial training. This tutorial reviews the main approaches to joint modeling for both statistical and neural methods.
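The abstract names parameter sharing as one neural approach to joint modeling. The sketch below, which is illustrative and not taken from the tutorial itself, shows the common hard-parameter-sharing setup: two hypothetical tagging tasks (POS tagging and chunking) read the same shared BiLSTM encoder but keep task-specific output layers, so errors from one stage never feed a downstream stage. All names, dimensions, and tag-set sizes are assumptions for illustration.

```python
# A minimal sketch of joint modeling via hard parameter sharing (assumed
# setup, not the tutorial's reference implementation): POS tagging and
# chunking share one BiLSTM encoder and keep separate output heads.
import torch
import torch.nn as nn

class JointTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200,
                 n_pos_tags=45, n_chunk_tags=23):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder: both tasks see the same contextual states.
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Task-specific heads: these parameters are NOT shared.
        self.pos_head = nn.Linear(hidden_dim, n_pos_tags)
        self.chunk_head = nn.Linear(hidden_dim, n_chunk_tags)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.pos_head(states), self.chunk_head(states)

model = JointTagger(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (8, 20))   # dummy batch: 8 sentences x 20 tokens
pos_logits, chunk_logits = model(tokens)
# Joint training sums the per-task losses, so gradients from both tasks
# update the shared encoder; adversarial training (omitted here) would
# add a discriminator on the shared states with reversed gradients.
```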
Anthology ID:
D18-3001
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Mausam, Lu Wang
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/D18-3001
Cite (ACL):
Yue Zhang. 2018. Joint models for NLP. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Joint models for NLP (Zhang, EMNLP 2018)