Syntactically Meaningful and Transferable Recursive Neural Networks for Aspect and Opinion Extraction

Wenya Wang, Sinno Jialin Pan


Abstract
In fine-grained opinion mining, extracting aspect terms (a.k.a. opinion targets) and opinion terms (a.k.a. opinion expressions) from user-generated texts is the most fundamental task for generating structured opinion summaries. Existing studies have shown that the syntactic relations between aspect and opinion words play an important role in aspect and opinion term extraction. However, most prior works either relied on predefined rules or separated relation mining from feature learning. Moreover, these works focused only on single-domain extraction, which fails to adapt well to other domains of interest where only unlabeled data are available. In real-world scenarios, annotated resources are extremely scarce for many domains, motivating knowledge transfer strategies from labeled source domain(s) to any unlabeled target domain. We observe that syntactic relations among the target words to be extracted are not only crucial for single-domain extraction, but also serve as invariant “pivot” information to bridge the gap between different domains. In this article, we explore constructions of recursive neural networks based on the dependency tree of each sentence to associate syntactic structure with feature learning. Furthermore, we construct transferable recursive neural networks to automatically learn the domain-invariant fine-grained interactions among aspect words and opinion words. The transferability is built on an auxiliary task and a conditional domain adversarial network that together reduce the domain distribution difference in the hidden spaces effectively at the word level through syntactic relations. Specifically, the auxiliary task builds structural correspondences across domains by predicting the dependency relation for each path of the dependency tree in the recursive neural network. The conditional domain adversarial network helps to learn a domain-invariant hidden representation for each word conditioned on the syntactic structure.
Finally, we integrate the recursive neural network with a sequence labeling classifier on top that models contextual influence in the final predictions. Extensive experiments and analysis demonstrate the effectiveness of the proposed model and of each component on three benchmark data sets.
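The core idea of a recursive neural network over a dependency tree can be illustrated with a minimal sketch: each node's hidden vector is composed bottom-up from its own word embedding and its children's hidden states, using relation-specific weight matrices. The names, dimensions, and relation set below are hypothetical toy choices for illustration, not the paper's actual architecture (which adds the auxiliary relation-prediction task, the conditional adversarial network, and a sequence labeling layer on top).

```python
import numpy as np

# Toy sketch of dependency-tree recursive composition (hypothetical setup).
DIM = 4
rng = np.random.default_rng(0)

# One weight matrix per dependency relation, plus a shared input matrix.
RELATIONS = ["nsubj", "cop", "amod", "dobj"]
W_rel = {r: rng.standard_normal((DIM, DIM)) * 0.1 for r in RELATIONS}
W_in = rng.standard_normal((DIM, DIM)) * 0.1

def compose(node, embeddings):
    """Bottom-up pass: a node's hidden vector combines its own word
    embedding with its children's hidden states, each transformed by the
    weight matrix of the dependency relation linking them."""
    h = W_in @ embeddings[node["word"]]
    for rel, child in node["children"]:
        h += W_rel[rel] @ compose(child, embeddings)
    return np.tanh(h)

# Toy sentence: "screen is great", headed by "great" in the dependency tree.
embeddings = {w: rng.standard_normal(DIM) for w in ["screen", "is", "great"]}
tree = {"word": "great",
        "children": [("nsubj", {"word": "screen", "children": []}),
                     ("cop",   {"word": "is", "children": []})]}

h_root = compose(tree, embeddings)
print(h_root.shape)  # prints (4,)
```

In the paper's transferable setting, the hidden vectors produced along each dependency path would additionally feed a relation classifier (the auxiliary task) and a domain discriminator, so that the learned compositions stay predictive of syntax while becoming indistinguishable across domains.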
Anthology ID: J19-4004
Volume: Computational Linguistics, Volume 45, Issue 4 - December 2019
Month: December
Year: 2019
Address: Cambridge, MA
Venue: CL
Publisher: MIT Press
Pages: 705–736
URL: https://aclanthology.org/J19-4004
DOI: 10.1162/coli_a_00362
Cite (ACL): Wenya Wang and Sinno Jialin Pan. 2019. Syntactically Meaningful and Transferable Recursive Neural Networks for Aspect and Opinion Extraction. Computational Linguistics, 45(4):705–736.
Cite (Informal): Syntactically Meaningful and Transferable Recursive Neural Networks for Aspect and Opinion Extraction (Wang & Pan, CL 2019)
PDF: https://aclanthology.org/J19-4004.pdf