Learning Hard Retrieval Decoder Attention for Transformers

Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong


Abstract
The Transformer translation model is based on the multi-head attention mechanism, which can be parallelized easily. The multi-head attention network performs the scaled dot-product attention function in parallel, empowering the model by jointly attending to information from different representation subspaces at different positions. In this paper, we present an approach to learning hard retrieval attention, in which an attention head attends to only one token in the sentence rather than to all tokens. The matrix multiplication between the attention probabilities and the value sequence in standard scaled dot-product attention can thus be replaced by a simple and efficient retrieval operation. We show that our hard retrieval attention mechanism is 1.43 times faster in decoding while preserving translation quality on a wide range of machine translation tasks when used in the decoder self- and cross-attention networks.
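The following is a minimal sketch, written in PyTorch as an assumption (it is not the authors' released code), contrasting standard scaled dot-product attention with the hard retrieval variant described in the abstract: instead of a softmax-weighted sum over all value vectors, each query retrieves only the value vector with the highest attention score. The training-time handling of the non-differentiable selection discussed in the paper is omitted here.

```python
import torch


def scaled_dot_product_attention(q, k, v):
    # q: (batch, heads, q_len, d_head); k, v: (batch, heads, k_len, d_head)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    probs = scores.softmax(dim=-1)
    # Weighted sum over the whole value sequence (a matrix multiplication).
    return probs @ v


def hard_retrieval_attention(q, k, v):
    # Same attention scores, but each query keeps only its highest-scoring key.
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    idx = scores.argmax(dim=-1)  # (batch, heads, q_len)
    # Retrieve the corresponding value vector directly (a gather),
    # replacing the probs @ v matrix multiplication used above.
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, v.size(-1))
    return torch.gather(v, dim=-2, index=idx)


if __name__ == "__main__":
    # Hypothetical shapes: batch 2, 8 heads, 5 target and 7 source positions.
    q = torch.randn(2, 8, 5, 64)
    k = torch.randn(2, 8, 7, 64)
    v = torch.randn(2, 8, 7, 64)
    print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 8, 5, 64])
    print(hard_retrieval_attention(q, k, v).shape)      # torch.Size([2, 8, 5, 64])
```

At inference time the gather touches a single value vector per query, which is where the reported decoding speed-up would come from in this kind of setup.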
Anthology ID:
2021.findings-emnlp.67
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
779–785
URL:
https://aclanthology.org/2021.findings-emnlp.67
DOI:
10.18653/v1/2021.findings-emnlp.67
Cite (ACL):
Hongfei Xu, Qiuhui Liu, Josef van Genabith, and Deyi Xiong. 2021. Learning Hard Retrieval Decoder Attention for Transformers. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 779–785, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Learning Hard Retrieval Decoder Attention for Transformers (Xu et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.67.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.67.mp4