Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer

Qingru Zhang, Dhananjay Ram, Cole Hawkins, Sheng Zha, Tuo Zhao


Abstract
Pretrained transformer models have demonstrated remarkable performance across various natural language processing tasks. These models leverage the attention mechanism to capture long- and short-range dependencies in the sequence. However, the (full) attention mechanism incurs a computational cost that is quadratic in the sequence length, which is unaffordable for tasks with long sequences, e.g., inputs of 8k tokens. Although sparse attention, as suggested in existing work, can improve computational efficiency, it has limited modeling capacity and often fails to capture complicated dependencies in long sequences. To tackle this challenge, we propose MASFormer, an easy-to-implement transformer variant with mixed attention spans. Specifically, MASFormer uses full attention to capture long-range dependencies, but only at a small number of layers; the remaining layers employ only sparse attention to capture short-range dependencies. Our experiments on natural language modeling and generation tasks show that a decoder-only MASFormer model with 1.3B parameters achieves performance competitive with vanilla transformers using full attention while significantly reducing computational cost (by up to 75%). Additionally, we investigate the effectiveness of continual training with long-sequence data and how sequence length impacts downstream generation performance, which may be of independent interest.
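To make the mixed-attention-span idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a decoder stack in which only a few layers use full causal attention while the remaining layers restrict attention to a local sliding window. The layer placement, window size, model dimensions, and all class and parameter names (MixedSpanDecoder, n_full, window, etc.) are illustrative assumptions.

# Minimal sketch of mixed attention spans (illustrative, not the paper's code).
from typing import Optional
import torch
import torch.nn as nn


def causal_mask(seq_len: int, window: Optional[int] = None) -> torch.Tensor:
    """Boolean mask, True = blocked. Causal; optionally limited to a local window."""
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    blocked = j > i                       # causal: no attending to future tokens
    if window is not None:
        blocked |= (i - j) >= window      # sparse: attend only to the last `window` tokens
    return blocked


class DecoderLayer(nn.Module):
    def __init__(self, d_model: int, n_heads: int, window: Optional[int]):
        super().__init__()
        self.window = window              # None -> full attention span
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = causal_mask(x.size(1), self.window).to(x.device)
        h = self.ln1(x)
        h, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + h
        return x + self.ff(self.ln2(x))


class MixedSpanDecoder(nn.Module):
    """Full attention at only a few layers (here, the last n_full); local attention elsewhere."""
    def __init__(self, n_layers=12, n_full=2, d_model=256, n_heads=4, window=128):
        super().__init__()
        self.layers = nn.ModuleList(
            DecoderLayer(d_model, n_heads,
                         window=None if i >= n_layers - n_full else window)
            for i in range(n_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x


if __name__ == "__main__":
    model = MixedSpanDecoder()
    out = model(torch.randn(1, 512, 256))  # (batch, seq_len, d_model)
    print(out.shape)

In this sketch only the full-attention layers pay the quadratic cost, while the sliding-window layers scale linearly in sequence length; which layers receive full attention is a design choice, and the placement shown here is only one possible configuration.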
Anthology ID:
2023.findings-emnlp.183
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2775–2786
URL:
https://aclanthology.org/2023.findings-emnlp.183
DOI:
10.18653/v1/2023.findings-emnlp.183
Cite (ACL):
Qingru Zhang, Dhananjay Ram, Cole Hawkins, Sheng Zha, and Tuo Zhao. 2023. Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2775–2786, Singapore. Association for Computational Linguistics.
Cite (Informal):
Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer (Zhang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.183.pdf