Exploring Sequence-to-Sequence Learning in Aspect Term Extraction

Dehong Ma, Sujian Li, Fangzhao Wu, Xing Xie, Houfeng Wang


Abstract
Aspect term extraction (ATE) aims at identifying all aspect terms in a sentence and is usually modeled as a sequence labeling problem. However, sequence labeling based methods cannot make full use of the overall meaning of the whole sentence and are limited in modeling dependencies between labels. To tackle these problems, we first explore formalizing ATE as a sequence-to-sequence (Seq2Seq) learning task, where the source sequence and target sequence are composed of words and labels respectively. At the same time, to adapt Seq2Seq learning to ATE, where labels correspond to words one by one, we design gated unit networks to incorporate the corresponding word representation into the decoder, and position-aware attention to attend more to the words adjacent to a target word. Experimental results on two datasets show that Seq2Seq learning is effective for ATE when combined with our proposed gated unit networks and position-aware attention mechanism.
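The abstract describes two components: position-aware attention that up-weights words near the word currently being labeled, and a gated unit that mixes the attention context with that word's encoder representation before label prediction. Below is a minimal numpy sketch of these two ideas; the function names, the Gaussian distance penalty, and the sigmoid gate form are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def position_aware_attention(H, s_t, t, sigma=2.0):
    """Context vector for decoding step t.

    H: (n, d) encoder hidden states; s_t: (d,) decoder state;
    t: index of the word currently being labeled.
    Scores combine content relevance with a distance penalty so that
    words adjacent to position t receive more attention
    (Gaussian penalty is an illustrative choice).
    """
    n = H.shape[0]
    content = H @ s_t                            # dot-product relevance
    dist = np.arange(n) - t
    position = -(dist ** 2) / (2 * sigma ** 2)   # favors nearby words
    alpha = softmax(content + position)          # attention weights
    return alpha @ H, alpha                      # context vector, weights

def gated_unit(s_t, c_t, h_t):
    """Mix the attention context c_t with the aligned word's encoding h_t
    before predicting the label for word t (element-wise sigmoid gate,
    an illustrative form of the paper's gated unit)."""
    g = 1.0 / (1.0 + np.exp(-(s_t + c_t + h_t)))
    return g * c_t + (1.0 - g) * h_t
```

With a zero decoder state the content term vanishes, so the attention distribution peaks exactly at the target position, which illustrates how the position term enforces locality even before the model has learned anything.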
Anthology ID:
P19-1344
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3538–3547
URL:
https://aclanthology.org/P19-1344
DOI:
10.18653/v1/P19-1344
Cite (ACL):
Dehong Ma, Sujian Li, Fangzhao Wu, Xing Xie, and Houfeng Wang. 2019. Exploring Sequence-to-Sequence Learning in Aspect Term Extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3538–3547, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Exploring Sequence-to-Sequence Learning in Aspect Term Extraction (Ma et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1344.pdf
Video:
https://aclanthology.org/P19-1344.mp4
Data
SemEval-2014 Task-4