RNN Architecture Learning with Sparse Regularization

Jesse Dodge, Roy Schwartz, Hao Peng, Noah A. Smith


Abstract
Neural models for NLP typically use large numbers of parameters to reach state-of-the-art performance, which can lead to excessive memory usage and increased runtime. We present a structure learning method for learning sparse, parameter-efficient NLP models. Our method applies group lasso to rational RNNs (Peng et al., 2018), a family of models that is closely connected to weighted finite-state automata (WFSAs). We take advantage of rational RNNs’ natural grouping of the weights, so the group lasso penalty directly removes WFSA states, substantially reducing the number of parameters in the model. Our experiments on a number of sentiment analysis datasets, using both GloVe and BERT embeddings, show that our approach learns neural structures which have fewer parameters without sacrificing performance relative to parameter-rich baselines. Our method also highlights the interpretable properties of rational RNNs. We show that sparsifying such models makes them easier to visualize, and we present models that rely exclusively on as few as three WFSAs after pruning more than 90% of the weights. We publicly release our code.
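For readers who want a concrete picture of the regularizer described in the abstract, below is a minimal sketch of a generic group lasso penalty in PyTorch. The per-state grouping, the lam coefficient, and the toy task loss are illustrative assumptions for exposition only, not the paper's released implementation (see the linked code repository for that).

import torch

def group_lasso_penalty(state_weight_groups, lam=0.01):
    # Group lasso: a sum of (unsquared) L2 norms, one term per weight group.
    # Each group here stands in for the parameters tied to a single WFSA state,
    # so driving a group's norm to zero removes that state entirely.
    # (Illustrative grouping; the paper's exact parameterization differs.)
    return lam * sum(g.norm(p=2) for g in state_weight_groups)

# Toy usage: three "states", each with its own weight vector.
groups = [torch.randn(16, requires_grad=True) for _ in range(3)]
task_loss = sum((g ** 2).mean() for g in groups)  # stand-in for the task loss
loss = task_loss + group_lasso_penalty(groups, lam=0.1)
loss.backward()

Because the penalty is a sum of group norms rather than a single L1 term over individual weights, its gradient pushes whole groups toward zero together, which is what allows entire WFSA states to be pruned.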
Anthology ID:
D19-1110
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1179–1184
URL:
https://aclanthology.org/D19-1110
DOI:
10.18653/v1/D19-1110
Cite (ACL):
Jesse Dodge, Roy Schwartz, Hao Peng, and Noah A. Smith. 2019. RNN Architecture Learning with Sparse Regularization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1179–1184, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
RNN Architecture Learning with Sparse Regularization (Dodge et al., EMNLP-IJCNLP 2019)
PDF:
https://aclanthology.org/D19-1110.pdf
Attachment:
D19-1110.Attachment.pdf
Code:
dodgejesse/sparsifying_regularizers_for_RRNNs