End to End Binarized Neural Networks for Text Classification

Kumar Shridhar, Harshil Jain, Akshat Agarwal, Denis Kleyko


Abstract
Deep neural networks have demonstrated superior performance on almost every Natural Language Processing task; however, their increasing complexity raises concerns. In particular, these networks impose high demands on computing hardware and training budgets, with state-of-the-art transformer models being a vivid example. Simplifying the computations performed by a network is one way of addressing the issue of increasing complexity. In this paper, we propose an end-to-end binarized neural network for the tasks of intent and text classification. To fully exploit the potential of end-to-end binarization, both the input representations (vector embeddings of token statistics) and the classifier are binarized. We demonstrate the efficiency of such a network on intent classification of short texts over three datasets, and on text classification with a larger dataset. On the considered datasets, the proposed network achieves results comparable to the state of the art while using 20–40% less memory and training time than the benchmarks.
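To give a flavor of the core idea, the following is a minimal sketch (not the authors' exact implementation) of binarization as described in the abstract: real-valued features are mapped to {-1, +1} with a sign function, and a linear classifier with weights likewise constrained to {-1, +1} can then compute its dot products with cheap XNOR-and-popcount logic. The feature and weight values below are hypothetical toy data.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via the sign function (0 -> +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(a, b):
    """Dot product of two {-1, +1} vectors via XNOR-popcount:
    dot = 2 * (number of matching positions) - n."""
    bits_a = (a > 0)          # encode +1 as True, -1 as False
    bits_b = (b > 0)
    matches = int(np.sum(bits_a == bits_b))  # XNOR then popcount
    return 2 * matches - len(a)

# Hypothetical real-valued token-statistics features for one example
x = np.array([3.0, 0.0, -1.0, 2.0, -5.0, 0.5])
bx = binarize(x)              # binarized input representation

# Binarized linear classifier: weights are also constrained to {-1, +1}
w = binarize(np.array([0.2, -0.7, -0.1, 1.3, -0.4, -0.9]))

# The bit-level dot product agrees with the ordinary one
score = binary_dot(bx, w)
assert score == int(np.dot(bx, w))
```

Because both operands live in {-1, +1}, each vector can be stored as a bitmap, which is the source of the memory and compute savings the paper reports.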
Anthology ID:
2020.sustainlp-1.4
Volume:
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | sustainlp
Publisher:
Association for Computational Linguistics
Pages:
29–34
URL:
https://aclanthology.org/2020.sustainlp-1.4
DOI:
10.18653/v1/2020.sustainlp-1.4
PDF:
https://aclanthology.org/2020.sustainlp-1.4.pdf
Optional supplementary material:
 2020.sustainlp-1.4.OptionalSupplementaryMaterial.zip
Video:
 https://slideslive.com/38939423