Memory-efficient Transformers via Top-k Attention

Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonathan Berant


Abstract
Following the success of dot-product attention in Transformers, numerous approximations have recently been proposed to address its quadratic complexity with respect to the input length. While these variants are memory- and compute-efficient, it is not possible to directly use them with popular pre-trained language models trained using vanilla attention, without an expensive corrective pre-training stage. In this work, we propose a simple yet highly accurate approximation for vanilla attention. We process the queries in chunks, and for each query, compute the top-*k* scores with respect to the keys. Our approach offers several advantages: (a) its memory usage is linear in the input size, similar to linear attention variants such as Performer and RFA, (b) it is a drop-in replacement for vanilla attention that does not require any corrective pre-training, and (c) it can also lead to significant memory savings in the feed-forward layers after casting them into the familiar query-key-value framework. We evaluate the quality of the top-*k* approximation for multi-head attention layers on the Long Range Arena Benchmark, and for feed-forward layers of T5 and UnifiedQA on multiple QA datasets. We show our approach leads to accuracy that is nearly identical to vanilla attention in multiple setups, including training from scratch, fine-tuning, and zero-shot inference.
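The chunked top-*k* mechanism described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under stated assumptions, not the authors' released implementation (see the linked ag1988/top_k_attention repository for that); the function name `topk_attention` and the `topk`/`chunk_size` parameters are illustrative choices.

```python
# Minimal sketch (not the authors' implementation): chunked top-k attention.
# Queries are processed in chunks; for each query, only the k largest
# query-key scores are kept before the softmax, so peak memory stays linear
# in the sequence length for a fixed chunk size and fixed k.
import torch
import torch.nn.functional as F

def topk_attention(q, k, v, topk=64, chunk_size=1024):
    # q: (n_q, d), k: (n_k, d), v: (n_k, d_v)
    scale = q.shape[-1] ** -0.5
    outputs = []
    for start in range(0, q.shape[0], chunk_size):
        q_chunk = q[start:start + chunk_size]                 # (c, d)
        scores = (q_chunk @ k.T) * scale                      # (c, n_k)
        top_vals, top_idx = scores.topk(min(topk, k.shape[0]), dim=-1)
        probs = F.softmax(top_vals, dim=-1)                   # softmax over top-k scores only
        # gather the corresponding value vectors and mix them with the top-k weights
        out = torch.einsum('ck,ckd->cd', probs, v[top_idx])   # (c, d_v)
        outputs.append(out)
    return torch.cat(outputs, dim=0)
```

Because the softmax is dominated by the largest scores, restricting it to the top-*k* keys per query tends to approximate full attention closely, which is what makes this usable as a drop-in replacement for models pre-trained with vanilla attention.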
Anthology ID:
2021.sustainlp-1.5
Volume:
Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing
Month:
November
Year:
2021
Address:
Virtual
Editors:
Nafise Sadat Moosavi, Iryna Gurevych, Angela Fan, Thomas Wolf, Yufang Hou, Ana Marasović, Sujith Ravi
Venue:
sustainlp
Publisher:
Association for Computational Linguistics
Pages:
39–52
URL:
https://aclanthology.org/2021.sustainlp-1.5
DOI:
10.18653/v1/2021.sustainlp-1.5
Cite (ACL):
Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, and Jonathan Berant. 2021. Memory-efficient Transformers via Top-k Attention. In Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing, pages 39–52, Virtual. Association for Computational Linguistics.
Cite (Informal):
Memory-efficient Transformers via Top-k Attention (Gupta et al., sustainlp 2021)
PDF:
https://aclanthology.org/2021.sustainlp-1.5.pdf
Video:
https://aclanthology.org/2021.sustainlp-1.5.mp4
Code:
ag1988/top_k_attention
Data:
BoolQ, CommonsenseQA, IMDb Movie Reviews, ListOps, MCTest, OpenBookQA, ROPES, SQuAD, WikiText-103, WikiText-2