Using the Past Knowledge to Improve Sentiment Classification

Qi Qin, Wenpeng Hu, Bing Liu


Abstract
This paper studies sentiment classification in the lifelong learning setting, where a sequence of sentiment classification tasks is learned incrementally. It proposes a new lifelong learning model (called L2PG) that can retain the knowledge learned from past tasks and selectively transfer it to help learn the new task. A key innovation of the proposed model is a novel parameter-gate (p-gate) mechanism that regulates the transfer of previously learned knowledge to the new task: it selectively uses the network parameters (which encode the retained knowledge from previous tasks) to assist the learning of the new task t. Knowledge distillation is also employed to preserve past knowledge, by keeping the network output close to its state after task t-1 was learned. Experimental results show that L2PG outperforms strong baselines, including even multi-task learning.
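
The abstract names two mechanisms: a p-gate that controls how much of each retained parameter flows into the network for the new task, and a distillation term that anchors the new model's outputs to those of the snapshot saved after task t-1. The paper's exact formulation is not reproduced on this page, so the PyTorch sketch below is only a rough illustration under assumed design choices; every name in it (PGateLinear, distill_loss, the sigmoid gate parameterization, the temperature T) is hypothetical rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PGateLinear(nn.Module):
    """Hypothetical parameter-gated linear layer: blends frozen weights
    retained from previous tasks with freely trained weights for the new
    task, via a learned element-wise gate (one possible reading of the
    p-gate idea, not the paper's actual layer)."""

    def __init__(self, w_old: torch.Tensor):
        super().__init__()
        # Frozen copy of the weights learned on previous tasks (retained knowledge).
        self.w_old = nn.Parameter(w_old.clone(), requires_grad=False)
        # Trainable weights adapted to the new task t, initialized from the old ones.
        self.w_new = nn.Parameter(w_old.clone())
        # p-gate logits, one per weight; sigmoid maps them into (0, 1).
        self.gate = nn.Parameter(torch.zeros_like(w_old))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)                  # element-wise gate values
        w = g * self.w_old + (1.0 - g) * self.w_new   # selective transfer of old knowledge
        return F.linear(x, w)                         # bias omitted for brevity


def distill_loss(logits_new: torch.Tensor,
                 logits_old: torch.Tensor,
                 T: float = 2.0) -> torch.Tensor:
    """Standard knowledge-distillation term: keep the current model's
    softened output distribution close to that of the model snapshot
    saved after task t-1 (computed on the same inputs)."""
    p_old = F.softmax(logits_old / T, dim=-1)
    log_p_new = F.log_softmax(logits_new / T, dim=-1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
```

Under these assumptions, training on task t would minimize the usual cross-entropy on the new task's data plus a weighted distill_loss term, so the gate can pull in useful old parameters while the distillation term discourages forgetting.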
Anthology ID:
2020.findings-emnlp.101
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1124–1133
URL:
https://aclanthology.org/2020.findings-emnlp.101
DOI:
10.18653/v1/2020.findings-emnlp.101
Cite (ACL):
Qi Qin, Wenpeng Hu, and Bing Liu. 2020. Using the Past Knowledge to Improve Sentiment Classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1124–1133, Online. Association for Computational Linguistics.
Cite (Informal):
Using the Past Knowledge to Improve Sentiment Classification (Qin et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.101.pdf