Implicit n-grams Induced by Recurrence

Xiaobing Sun, Wei Lu


Abstract
Although self-attention based models such as Transformers have achieved remarkable success on natural language processing (NLP) tasks, recent studies reveal that they have limitations in modeling sequential transformations (Hahn, 2020), which may prompt re-examinations of recurrent neural networks (RNNs), which have demonstrated impressive results on handling sequential data. Despite many prior attempts to interpret RNNs, their internal mechanisms have not been fully understood, and the question of how exactly they capture sequential features remains largely unclear. In this work, we present a study showing that there exist explainable components residing within the hidden states, which are reminiscent of classical n-gram features. We evaluated such extracted explainable features from trained RNNs on downstream sentiment analysis tasks and found that they could be used to model interesting linguistic phenomena such as negation and intensification. Furthermore, we examined the efficacy of using such n-gram components alone as encoders on tasks such as sentiment analysis and language modeling, revealing that they could be playing important roles in contributing to the overall performance of RNNs. We hope our findings can add interpretability to RNN architectures and also provide inspiration for new architectures for sequential data.
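The abstract's central claim, that an RNN's hidden state contains components reminiscent of n-gram features, can be illustrated with a linear recurrence. Below is a minimal sketch, assuming a vanilla RNN with its nonlinearity dropped; it shows the linear-recurrence intuition only, not the paper's actual extraction procedure, and all variable names are hypothetical.

# A minimal sketch, assuming a linear (nonlinearity-free) vanilla RNN.
# Illustration only, not the authors' method; all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d_h, d_x, T, n = 8, 4, 10, 3                 # hidden dim, input dim, sequence length, n-gram order

W = rng.normal(scale=0.3, size=(d_h, d_h))   # recurrent weight matrix
U = rng.normal(scale=0.3, size=(d_h, d_x))   # input weight matrix
xs = rng.normal(size=(T, d_x))               # a toy input sequence

# Linear recurrence: h_t = W h_{t-1} + U x_t, starting from h_0 = 0.
h = np.zeros(d_h)
for x in xs:
    h = W @ h + U @ x

# Unrolling gives h_T = sum_{k=0}^{T-1} W^k U x_{T-1-k}; grouping the
# n most recent terms yields an implicit "n-gram" component of the state.
terms = [np.linalg.matrix_power(W, k) @ U @ xs[T - 1 - k] for k in range(T)]
ngram_component = sum(terms[:n])             # contribution of the last n tokens
older_context = sum(terms[n:])               # contribution of everything earlier

# The two components exactly reconstruct the final hidden state.
assert np.allclose(h, ngram_component + older_context)

Under this linear assumption the decomposition is exact; the paper studies trained, nonlinear RNNs, where such components must instead be identified empirically.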
Anthology ID:
2022.naacl-main.117
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1624–1639
URL:
https://aclanthology.org/2022.naacl-main.117
DOI:
10.18653/v1/2022.naacl-main.117
Cite (ACL):
Xiaobing Sun and Wei Lu. 2022. Implicit n-grams Induced by Recurrence. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1624–1639, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Implicit n-grams Induced by Recurrence (Sun & Lu, NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.117.pdf
Video:
https://aclanthology.org/2022.naacl-main.117.mp4
Code:
richardsun-voyager/inibr
Data:
AG News, IMDb Movie Reviews, SST, SST-2, SST-5