Inducing Target-Specific Latent Structures for Aspect Sentiment Classification

Chenhua Chen, Zhiyang Teng, Yue Zhang


Abstract
Aspect-level sentiment analysis aims to recognize the sentiment polarity of an aspect or a target in a comment. Recently, graph convolutional networks based on linguistic dependency trees have been studied for this task. However, dependency parsing accuracy on commercial product comments or tweets can be unsatisfactory. To tackle this problem, we associate linguistic dependency trees with automatically induced aspect-specific graphs. We propose gating mechanisms to dynamically combine information from word dependency graphs and latent graphs learned by self-attention networks. Our model can complement supervised syntactic features with latent semantic dependencies. Experimental results on five benchmarks show the effectiveness of our proposed latent models, giving significantly better results than models without latent graphs.
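To make the abstract's core idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) of combining a graph convolution over a parser-provided dependency graph with a latent graph induced by self-attention, fusing the two representations through a gate. The class name GatedLatentGCN and all tensor names are illustrative assumptions; the paper's exact architecture and hyperparameters are not specified on this page.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedLatentGCN(nn.Module):
    """Illustrative fusion of a dependency-graph GCN and a self-attention latent graph."""

    def __init__(self, hidden_dim):
        super().__init__()
        # GCN weights for the syntactic (dependency) graph and the latent graph
        self.w_syn = nn.Linear(hidden_dim, hidden_dim)
        self.w_lat = nn.Linear(hidden_dim, hidden_dim)
        # Self-attention projections used to induce the latent graph
        self.q_proj = nn.Linear(hidden_dim, hidden_dim)
        self.k_proj = nn.Linear(hidden_dim, hidden_dim)
        # Gate that mixes the two graph-convolved representations
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h, dep_adj):
        # h: (batch, seq_len, hidden_dim) contextual word representations
        # dep_adj: (batch, seq_len, seq_len) adjacency matrix from a dependency parser
        # 1) Induce a latent graph with scaled dot-product self-attention.
        q, k = self.q_proj(h), self.k_proj(h)
        latent_adj = F.softmax(q @ k.transpose(1, 2) / h.size(-1) ** 0.5, dim=-1)

        # 2) One graph-convolution step over each graph (row-normalize dep_adj).
        dep_norm = dep_adj / dep_adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        h_syn = F.relu(self.w_syn(dep_norm @ h))
        h_lat = F.relu(self.w_lat(latent_adj @ h))

        # 3) The gate decides, per position and dimension, how much to trust the
        #    syntactic graph versus the induced latent graph.
        g = torch.sigmoid(self.gate(torch.cat([h_syn, h_lat], dim=-1)))
        return g * h_syn + (1.0 - g) * h_lat


# Example usage with random inputs (shapes only; not real data):
# model = GatedLatentGCN(hidden_dim=64)
# out = model(torch.randn(2, 10, 64), torch.eye(10).expand(2, 10, 10))
```

The gating step is what lets such a model back off from noisy parses on user-generated text, which is the motivation stated in the abstract.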
Anthology ID:
2020.emnlp-main.451
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5596–5607
URL:
https://aclanthology.org/2020.emnlp-main.451
DOI:
10.18653/v1/2020.emnlp-main.451
Cite (ACL):
Chenhua Chen, Zhiyang Teng, and Yue Zhang. 2020. Inducing Target-Specific Latent Structures for Aspect Sentiment Classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5596–5607, Online. Association for Computational Linguistics.
Cite (Informal):
Inducing Target-Specific Latent Structures for Aspect Sentiment Classification (Chen et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.451.pdf
Video:
https://slideslive.com/38938882