Learning Domain Representation for Multi-Domain Sentiment Classification

Qi Liu, Yue Zhang, Jiangming Liu


Abstract
Training data for sentiment analysis are abundant in some domains, yet scarce in others. It is useful to leverage the data available across all existing domains to enhance performance on each domain. We investigate this problem by learning domain-specific representations of input sentences using a neural network. In particular, a descriptor vector is learned to represent each domain, and is used to map adversarially trained domain-general Bi-LSTM input representations into domain-specific representations. Based on this model, we further expand the input representation with exemplary domain knowledge, collected by attending over a memory network of domain training data. Results show that our model significantly outperforms existing methods on multi-domain sentiment analysis, giving the best accuracies on two different benchmarks.
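The memory-attention step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes simple dot-product scoring between a learned domain descriptor (the query) and stored example representations (the memory), whereas the paper's exact scoring function, memory contents, and dimensions are not specified in this abstract.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend_memory(domain_descriptor, memory):
    """Attend over a memory of stored domain example representations,
    using the domain descriptor vector as the query.  Returns the
    attention-weighted summary vector and the attention weights."""
    scores = memory @ domain_descriptor   # (n_examples,) dot-product scores
    weights = softmax(scores)             # attention distribution over memory
    summary = weights @ memory            # (dim,) weighted combination
    return summary, weights

# Hypothetical sizes for illustration: 5 stored examples of dimension 8.
rng = np.random.default_rng(0)
memory = rng.standard_normal((5, 8))
descriptor = rng.standard_normal(8)
summary, weights = attend_memory(descriptor, memory)
```

The resulting `summary` vector would then be concatenated with the sentence representation to expand the input with exemplary domain knowledge, as the abstract describes.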
Anthology ID:
N18-1050
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
541–550
URL:
https://aclanthology.org/N18-1050
DOI:
10.18653/v1/N18-1050
Cite (ACL):
Qi Liu, Yue Zhang, and Jiangming Liu. 2018. Learning Domain Representation for Multi-Domain Sentiment Classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 541–550, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Learning Domain Representation for Multi-Domain Sentiment Classification (Liu et al., NAACL 2018)
PDF:
https://aclanthology.org/N18-1050.pdf