Natural Language Processing with Small Feed-Forward Networks

Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, Slav Petrov


Abstract
We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
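To make the abstract's notion of a "small and shallow feed-forward network" concrete, here is an illustrative sketch of such a model: pooled feature embeddings feeding one hidden layer and a softmax over output tags. All dimensions, names, and the feature ids are assumptions for illustration; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper): a tiny model with
# a 100-row, 8-dim embedding table, one 16-unit hidden layer, and 3 tags.
VOCAB, EMB, HIDDEN, TAGS = 100, 8, 16, 3

# Parameters of the small feed-forward model.
E = rng.normal(0, 0.1, (VOCAB, EMB))           # feature embedding table
W1 = rng.normal(0, 0.1, (EMB, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, TAGS))
b2 = np.zeros(TAGS)

def predict(feature_ids):
    """Forward pass: average the embeddings of the input feature ids,
    apply one ReLU hidden layer, and return a softmax over tags."""
    x = E[feature_ids].mean(axis=0)            # pooled feature embedding
    h = np.maximum(0.0, x @ W1 + b1)           # single hidden layer (ReLU)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())          # numerically stable softmax
    return p / p.sum()

probs = predict([3, 17, 42])                   # toy feature ids (assumed)
```

The entire model here is a few thousand parameters, which is the kind of footprint the abstract contrasts with deep recurrent models on resource-constrained devices.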
Anthology ID:
D17-1309
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2879–2885
URL:
https://aclanthology.org/D17-1309
DOI:
10.18653/v1/D17-1309
Cite (ACL):
Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, and Slav Petrov. 2017. Natural Language Processing with Small Feed-Forward Networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2879–2885, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Natural Language Processing with Small Feed-Forward Networks (Botha et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1309.pdf
Attachment:
D17-1309.Attachment.zip
Video:
https://aclanthology.org/D17-1309.mp4