SqueezeBERT: What can computer vision teach NLP about efficient neural networks?

Forrest Iandola, Albert Shaw, Ravi Krishna, Kurt Keutzer


Abstract
Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets, large computing systems, and better neural network models, natural language processing (NLP) technology has made significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. Toward this end, we consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today’s highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. To begin to address this problem, we draw inspiration from the computer vision community, where work such as MobileNet has demonstrated that grouped convolutions (e.g. depthwise convolutions) can enable speedups without sacrificing accuracy. We demonstrate how to replace several operations in self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test set. A PyTorch-based implementation of SqueezeBERT is available as part of the Hugging Face Transformers library: https://huggingface.co/squeezebert
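The core technique named in the abstract, replacing operations in self-attention layers with grouped convolutions, can be illustrated with a short PyTorch sketch (PyTorch being the framework of the released implementation). This is a minimal illustration of the general idea, not the authors' code: a position-wise fully-connected projection in a transformer block is equivalent to a 1D convolution with kernel size 1, and generalizing it to a grouped convolution cuts that layer's parameters and FLOPs. The hidden size of 768 matches BERT-base; groups=4 is an assumed illustrative value.

    import torch
    import torch.nn as nn

    hidden = 768    # BERT-base hidden size
    seq_len = 128
    groups = 4      # illustrative; splits the channels into 4 groups

    x = torch.randn(1, hidden, seq_len)   # (batch, channels, positions)

    # Dense projection, as in standard BERT: a kernel_size=1 Conv1d is the
    # same computation as nn.Linear applied independently at each position.
    dense = nn.Conv1d(hidden, hidden, kernel_size=1)

    # Grouped variant: each output channel mixes only hidden/groups input
    # channels, reducing this layer's parameters and FLOPs by ~groups.
    grouped = nn.Conv1d(hidden, hidden, kernel_size=1, groups=groups)

    print(sum(p.numel() for p in dense.parameters()))    # 768*768 + 768 = 590,592
    print(sum(p.numel() for p in grouped.parameters()))  # 768*192 + 768 = 148,224
    print(grouped(x).shape)                              # torch.Size([1, 768, 128])

Because each group sees only a quarter of the input channels, the grouped layer does roughly a quarter of the work of the dense one; applying this substitution across the network's self-attention layers is what yields the 4.3x Pixel 3 speedup the abstract reports.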
Anthology ID:
2020.sustainlp-1.17
Volume:
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
Month:
November
Year:
2020
Address:
Online
Editors:
Nafise Sadat Moosavi, Angela Fan, Vered Shwartz, Goran Glavaš, Shafiq Joty, Alex Wang, Thomas Wolf
Venue:
sustainlp
Publisher:
Association for Computational Linguistics
Pages:
124–135
URL:
https://aclanthology.org/2020.sustainlp-1.17
DOI:
10.18653/v1/2020.sustainlp-1.17
Cite (ACL):
Forrest Iandola, Albert Shaw, Ravi Krishna, and Kurt Keutzer. 2020. SqueezeBERT: What can computer vision teach NLP about efficient neural networks? In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 124–135, Online. Association for Computational Linguistics.
Cite (Informal):
SqueezeBERT: What can computer vision teach NLP about efficient neural networks? (Iandola et al., sustainlp 2020)
PDF:
https://aclanthology.org/2020.sustainlp-1.17.pdf
Optional supplementary material:
 2020.sustainlp-1.17.OptionalSupplementaryMaterial.zip
Video:
 https://slideslive.com/38939439
Code:
huggingface/transformers (+ additional community code)
Data:
CoLA, GLUE, MRPC, MultiNLI, QNLI, Quora Question Pairs, RTE, SST, SST-2, WNLI