Predicting Attention Sparsity in Transformers

Marcos Treviso, António Góis, Patrick Fernandes, Erick Fonseca, Andre Martins


Abstract
Transformers’ quadratic complexity with respect to the input sequence length has motivated a body of work on efficient sparse approximations to softmax. An alternative path, used by entmax transformers, consists of having built-in exact sparse attention; however, this approach still requires quadratic computation. In this paper, we propose Sparsefinder, a simple model trained to identify the sparsity pattern of entmax attention before computing it. We experiment with three variants of our method, based on distances, quantization, and clustering, on two tasks: machine translation (attention in the decoder) and masked language modeling (encoder-only). Our work provides a new angle to study model efficiency through an extensive analysis of the tradeoff between the sparsity and recall of the predicted attention graph. This allows for a detailed comparison of different models along their Pareto curves, which is important for guiding future benchmarks for sparse attention models.
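
To make the bucketing idea concrete, below is a minimal sketch (plain NumPy, not the authors' implementation) of how a quantization-style variant could predict a sparse attention graph: queries and keys are projected into a low-dimensional space, quantized into shared buckets, and only pairs that land in the same bucket are kept as candidates. In Sparsefinder the projection is learned and attention over the predicted pairs uses entmax; here the projection is random and the function names (shared_buckets, predicted_graph) are illustrative assumptions.

import numpy as np

def shared_buckets(Q, K, proj, n_bins=4):
    """Project queries/keys to a low-dim space and quantize into shared buckets."""
    zq, zk = Q @ proj, K @ proj                        # (n, r) low-dim projections
    z_all = np.concatenate([zq, zk])
    edges = np.linspace(z_all.min(), z_all.max(), n_bins + 1)[1:-1]
    cq, ck = np.digitize(zq, edges), np.digitize(zk, edges)   # integer codes in [0, n_bins)
    dims = (n_bins,) * proj.shape[1]
    return np.ravel_multi_index(cq.T, dims), np.ravel_multi_index(ck.T, dims)

def predicted_graph(Q, K, proj, n_bins=4):
    """Return the (query, key) pairs whose projections fall in the same bucket."""
    qb, kb = shared_buckets(Q, K, proj, n_bins)
    return [(i, j) for i, b in enumerate(qb) for j in np.flatnonzero(kb == b)]

rng = np.random.default_rng(0)
n, d, r = 16, 64, 2
Q, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))
proj = rng.normal(size=(d, r))                         # stand-in for a learned projection
pairs = predicted_graph(Q, K, proj)
print(f"scoring {len(pairs)} of {n * n} possible query-key pairs")

Attention (entmax in the paper) would then be computed only over these candidate pairs; comparing the predicted pairs against the true entmax graph gives the sparsity-recall tradeoff analyzed in the paper.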
Anthology ID:
2022.spnlp-1.7
Volume:
Proceedings of the Sixth Workshop on Structured Prediction for NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Andreas Vlachos, Priyanka Agrawal, André Martins, Gerasimos Lampouras, Chunchuan Lyu
Venue:
spnlp
Publisher:
Association for Computational Linguistics
Pages:
67–81
URL:
https://aclanthology.org/2022.spnlp-1.7
DOI:
10.18653/v1/2022.spnlp-1.7
Cite (ACL):
Marcos Treviso, António Góis, Patrick Fernandes, Erick Fonseca, and Andre Martins. 2022. Predicting Attention Sparsity in Transformers. In Proceedings of the Sixth Workshop on Structured Prediction for NLP, pages 67–81, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Predicting Attention Sparsity in Transformers (Treviso et al., spnlp 2022)
PDF:
https://aclanthology.org/2022.spnlp-1.7.pdf
Data:
WikiText-103, WikiText-2