Revisiting the Importance of Encoding Logic Rules in Sentiment Classification

Kalpesh Krishna, Preethi Jyothi, Mohit Iyyer


Abstract
We analyze the performance of different sentiment classification models on syntactically complex inputs like A-but-B sentences. The first contribution of this analysis addresses reproducible research: to meaningfully compare different models, their accuracies must be averaged over far more random seeds than what has traditionally been reported. With proper averaging in place, we notice that the distillation model described in Hu et al. (2016), which incorporates explicit logic rules for sentiment classification, is ineffective. In contrast, using contextualized ELMo embeddings (Peters et al., 2018a) instead of logic rules yields significantly better performance. Additionally, we provide analysis and visualizations that demonstrate ELMo’s ability to implicitly learn logic rules. Finally, a crowdsourced analysis reveals how ELMo outperforms baseline models even on sentences with ambiguous sentiment labels.
Anthology ID:
D18-1505
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4743–4751
URL:
https://aclanthology.org/D18-1505
DOI:
10.18653/v1/D18-1505
Cite (ACL):
Kalpesh Krishna, Preethi Jyothi, and Mohit Iyyer. 2018. Revisiting the Importance of Encoding Logic Rules in Sentiment Classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4743–4751, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Revisiting the Importance of Encoding Logic Rules in Sentiment Classification (Krishna et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1505.pdf
Attachment:
D18-1505.Attachment.zip
Video:
https://vimeo.com/306136412
Code:
martiansideofthemoon/logic-rules-sentiment
Data:
SST