Cross-Domain Sentiment Classification using Semantic Representation

Shichen Li, Zhongqing Wang, Xiaotong Jiang, Guodong Zhou


Abstract
Previous studies on cross-domain sentiment classification depend on pivot features or utilize the target data for representation learning, both of which ignore the semantic relevance between different domains. To this end, we exploit Abstract Meaning Representation (AMR) to help with cross-domain sentiment classification. Compared with the textual input, AMR reduces data sparsity and explicitly provides core semantic knowledge and correlations between different domains. In particular, we develop an algorithm to construct a sentiment-driven semantic graph from sentence-level AMRs. We further design two strategies to linearize the semantic graph and propose a text-graph interaction model to fuse the text and semantic graph representations for cross-domain sentiment classification. Empirical studies show the effectiveness of our proposed model over several strong baselines. The results also indicate the importance of the proposed sentiment-driven semantic graph for cross-domain sentiment classification.
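To make the abstract's pipeline concrete, below is a minimal, hypothetical sketch (not the authors' code) of one way an AMR-style semantic graph could be linearized by depth-first traversal and concatenated with the original sentence before being passed to a standard text classifier. The `AMRNode`, `linearize`, and the `[GRAPH]` separator token are illustrative assumptions; the paper's two linearization strategies and its text-graph interaction model are not reproduced here.

```python
# Hypothetical sketch: depth-first linearization of an AMR-like semantic graph,
# followed by a simple text + graph concatenation as classifier input.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class AMRNode:
    """A node in a simplified AMR-like graph: a concept plus outgoing labeled edges."""
    concept: str
    edges: List[Tuple[str, "AMRNode"]] = field(default_factory=list)


def linearize(node: AMRNode) -> List[str]:
    """Depth-first linearization with brackets marking nesting.

    This is one plausible strategy only; the paper proposes two strategies
    that are not reproduced here.
    """
    tokens = ["(", node.concept]
    for relation, child in node.edges:
        tokens.append(relation)
        tokens.extend(linearize(child))
    tokens.append(")")
    return tokens


if __name__ == "__main__":
    # Toy AMR-like graph for "The battery lasts long."
    graph = AMRNode("last-01", [
        (":ARG1", AMRNode("battery")),
        (":duration", AMRNode("long")),
    ])
    sentence = "The battery lasts long ."

    # A naive fusion baseline: concatenate text and linearized graph, separated
    # by a marker token, then encode with any sentence classifier.
    model_input = sentence + " [GRAPH] " + " ".join(linearize(graph))
    print(model_input)
    # The battery lasts long . [GRAPH] ( last-01 :ARG1 ( battery ) :duration ( long ) )
```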
Anthology ID: 2022.findings-emnlp.22
Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 289–299
URL: https://aclanthology.org/2022.findings-emnlp.22
DOI: 10.18653/v1/2022.findings-emnlp.22
Cite (ACL):
Shichen Li, Zhongqing Wang, Xiaotong Jiang, and Guodong Zhou. 2022. Cross-Domain Sentiment Classification using Semantic Representation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 289–299, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Cross-Domain Sentiment Classification using Semantic Representation (Li et al., Findings 2022)
PDF: https://aclanthology.org/2022.findings-emnlp.22.pdf