UltraSparseBERT: 99% Conditionally Sparse Language Modelling

Peter Belcak, Roger Wattenhofer


Abstract
We present UltraSparseBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraSparseBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by reorganizing the feedforward networks into fast feedforward networks (FFFs). To showcase but one benefit of high sparsity, we provide an Intel MKL implementation achieving a 78x speedup over the optimized feedforward baseline on CPUs, and an OpenAI Triton implementation performing forward passes 4.1x faster than the corresponding native GPU implementation. The training and benchmarking code is enclosed.
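
To illustrate the conditional-execution idea summarized in the abstract, the sketch below arranges a layer's 4095 neurons as a balanced binary tree of depth 12 and lets each token descend a single root-to-leaf path, so only 12 neurons are ever evaluated per layer inference. This is a minimal illustrative sketch only: the class and parameter names (FastFeedforwardSketch, w_in, w_out), the GELU activation, and the sign-based branching rule are assumptions made for exposition, not the authors' released implementation.

    import numpy as np

    class FastFeedforwardSketch:
        """Inference-time sketch of a fast feedforward network (FFF).

        The 2**depth - 1 neurons (4095 for depth 12) form a complete binary
        tree; each token evaluates only the neurons on one root-to-leaf path.
        """

        def __init__(self, hidden_dim: int, depth: int = 12, seed: int = 0):
            rng = np.random.default_rng(seed)
            n_nodes = 2**depth - 1                   # 4095 neurons in total
            self.depth = depth
            self.w_in = rng.standard_normal((n_nodes, hidden_dim)) / np.sqrt(hidden_dim)
            self.b_in = np.zeros(n_nodes)
            self.w_out = rng.standard_normal((n_nodes, hidden_dim)) / np.sqrt(hidden_dim)

        def forward(self, x: np.ndarray) -> np.ndarray:
            """Conditional forward pass for a single token vector x."""
            y = np.zeros_like(x)
            node = 0                                 # start at the root neuron
            for _ in range(self.depth):              # 12 neurons touched per token
                logit = self.w_in[node] @ x + self.b_in[node]
                # GELU (tanh approximation) of the pre-activation
                act = 0.5 * logit * (1.0 + np.tanh(
                    np.sqrt(2.0 / np.pi) * (logit + 0.044715 * logit**3)))
                y += act * self.w_out[node]          # accumulate this neuron's output
                # descend: the sign of the pre-activation picks the child branch
                node = 2 * node + (1 if logit > 0 else 2)
            return y

A quick usage check with a BERT-base-sized hidden dimension:

    fff = FastFeedforwardSketch(hidden_dim=768)
    token = np.random.default_rng(1).standard_normal(768)
    out = fff.forward(token)   # evaluates 12 of the 4095 neurons
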
Anthology ID:
2024.acl-short.10
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
104–108
URL:
https://aclanthology.org/2024.acl-short.10
Cite (ACL):
Peter Belcak and Roger Wattenhofer. 2024. UltraSparseBERT: 99% Conditionally Sparse Language Modelling. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 104–108, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
UltraSparseBERT: 99% Conditionally Sparse Language Modelling (Belcak & Wattenhofer, ACL 2024)
PDF:
https://aclanthology.org/2024.acl-short.10.pdf