Sliced Recurrent Neural Networks

Zeping Yu, Gongshen Liu


Abstract
Recurrent neural networks have achieved great success in many NLP tasks. However, their recurrent structure makes them difficult to parallelize, so training RNNs is time-consuming. In this paper, we introduce sliced recurrent neural networks (SRNNs), which can be parallelized by slicing the sequences into many subsequences. SRNNs can obtain high-level information through multiple layers with few extra parameters. We prove that the standard RNN is a special case of the SRNN when linear activation functions are used. Without changing the recurrent units, SRNNs are 136 times as fast as standard RNNs, and could be even faster when training longer sequences. Experiments on six large-scale sentiment analysis datasets show that SRNNs achieve better performance than standard RNNs.
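
To make the slicing idea concrete, below is a minimal two-level sketch in Keras (the authors' released code, zepingyu0512/srnn, is Keras-based). The sequence length, number of slices, layer sizes, and class count are illustrative assumptions, not the paper's exact settings: the input sequence is cut into fixed-length subsequences, a bottom-level GRU encodes each subsequence independently, and a top-level GRU runs over the resulting subsequence vectors.

# Minimal sliced-RNN sketch (assumed hyperparameters, not the paper's settings)
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN = 512          # total sequence length (assumed)
NUM_SLICES = 8         # number of subsequences (assumed)
SUB_LEN = MAX_LEN // NUM_SLICES
VOCAB, EMB, UNITS = 30000, 128, 64

# Bottom level: a GRU reads one subsequence and returns a single vector.
sub_input = layers.Input(shape=(SUB_LEN,), dtype="int32")
sub_emb = layers.Embedding(VOCAB, EMB)(sub_input)
sub_encoded = layers.GRU(UNITS)(sub_emb)
sub_encoder = models.Model(sub_input, sub_encoded)

# Because the subsequences are encoded independently, TimeDistributed can run
# the bottom-level GRU over all slices as one larger batch instead of stepping
# through the full sequence; the top-level GRU then only needs NUM_SLICES steps.
doc_input = layers.Input(shape=(NUM_SLICES, SUB_LEN), dtype="int32")
doc_encoded = layers.TimeDistributed(sub_encoder)(doc_input)
doc_vector = layers.GRU(UNITS)(doc_encoded)
output = layers.Dense(5, activation="softmax")(doc_vector)  # e.g. 5-class sentiment

model = models.Model(doc_input, output)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Inputs are reshaped from (batch, MAX_LEN) to (batch, NUM_SLICES, SUB_LEN).
x = np.random.randint(0, VOCAB, size=(4, MAX_LEN)).reshape(4, NUM_SLICES, SUB_LEN)
model.predict(x)
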
Anthology ID: C18-1250
Volume: Proceedings of the 27th International Conference on Computational Linguistics
Month: August
Year: 2018
Address: Santa Fe, New Mexico, USA
Editors: Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 2953–2964
URL: https://aclanthology.org/C18-1250
Cite (ACL): Zeping Yu and Gongshen Liu. 2018. Sliced Recurrent Neural Networks. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2953–2964, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal): Sliced Recurrent Neural Networks (Yu & Liu, COLING 2018)
PDF: https://aclanthology.org/C18-1250.pdf
Code: zepingyu0512/srnn (plus additional community code)
Data: Yelp