Pit One Against Many: Leveraging Attention-head Embeddings for Parameter-efficient Multi-head Attention

Huiyin Xue, Nikolaos Aletras


Abstract
Scaling pre-trained language models has resulted in large performance gains on various natural language processing tasks but comes with a large cost in memory requirements. Inspired by the position embeddings in transformers, we aim to simplify and reduce the memory footprint of the multi-head attention (MHA) mechanism. We propose an alternative module that uses only a single shared projection matrix and multiple head embeddings (MHE), i.e., one per head. We empirically demonstrate that our MHE attention is substantially more memory efficient than alternative attention mechanisms, while retaining a high proportion of the predictive performance of vanilla MHA on several downstream tasks. MHE attention requires only a negligible fraction of additional parameters (3nd, where n is the number of attention heads and d is the size of the head embeddings) compared to single-head attention, whereas MHA requires (3n²−3n)d²−3nd additional parameters.
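To make the parameter accounting concrete, below is a minimal, hypothetical sketch of the idea described in the abstract: a single shared Q/K/V projection of head size d is reused by every head, and each head contributes only three learnable embeddings of size d (for queries, keys, and values), giving the 3nd extra parameters mentioned above. The class name, the element-wise addition of head embeddings to the shared projections, and all other details are assumptions for illustration; the authors' actual formulation may combine the head embeddings differently.

```python
import math
import torch
import torch.nn as nn


class MHEAttention(nn.Module):
    """Hypothetical multi-head-embedding attention sketch (not the authors' code)."""

    def __init__(self, d_model: int, n_heads: int, d_head: int):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        # One shared projection per Q/K/V (single-head sized), reused by every head.
        self.q_proj = nn.Linear(d_model, d_head, bias=False)
        self.k_proj = nn.Linear(d_model, d_head, bias=False)
        self.v_proj = nn.Linear(d_model, d_head, bias=False)
        # Head embeddings: 3 * n_heads * d_head extra parameters in total (the "3nd" of the abstract).
        self.head_emb = nn.Parameter(torch.zeros(3, n_heads, d_head))
        self.out_proj = nn.Linear(n_heads * d_head, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        # Assumed combination: add each head's embedding element-wise to the shared projection.
        q = self.q_proj(x).unsqueeze(1) + self.head_emb[0].view(1, self.n_heads, 1, self.d_head)
        k = self.k_proj(x).unsqueeze(1) + self.head_emb[1].view(1, self.n_heads, 1, self.d_head)
        v = self.v_proj(x).unsqueeze(1) + self.head_emb[2].view(1, self.n_heads, 1, self.d_head)
        # q, k, v: (batch, n_heads, seq_len, d_head); standard scaled dot-product attention per head.
        scores = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_head), dim=-1)
        out = (scores @ v).transpose(1, 2).reshape(x.size(0), x.size(1), -1)
        return self.out_proj(out)
```

Under these assumptions, the Q/K/V projections are shared across heads, so the per-head cost is only the three d-dimensional embeddings, in contrast to vanilla MHA, which allocates separate projection matrices per head.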
Anthology ID:
2023.findings-emnlp.695
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10355–10373
URL:
https://aclanthology.org/2023.findings-emnlp.695
DOI:
10.18653/v1/2023.findings-emnlp.695
Cite (ACL):
Huiyin Xue and Nikolaos Aletras. 2023. Pit One Against Many: Leveraging Attention-head Embeddings for Parameter-efficient Multi-head Attention. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10355–10373, Singapore. Association for Computational Linguistics.
Cite (Informal):
Pit One Against Many: Leveraging Attention-head Embeddings for Parameter-efficient Multi-head Attention (Xue & Aletras, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.695.pdf