Value-aware Approximate Attention

Ankit Gupta, Jonathan Berant


Abstract
Following the success of dot-product attention in Transformers, numerous approximations have been recently proposed to address its quadratic complexity with respect to the input length. However, all approximations thus far have ignored the contribution of the *value vectors* to the quality of approximation. In this work, we argue that research efforts should be directed towards approximating the true output of the attention sub-layer, which includes the value vectors. We propose a value-aware objective, and show theoretically and empirically that an optimal approximation of a value-aware objective substantially outperforms an optimal approximation that ignores values, in the context of language modeling. Moreover, we show that the choice of kernel function for computing attention similarity can substantially affect the quality of sparse approximations, where kernel functions that are less skewed are more affected by the value vectors.
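To make the abstract's distinction concrete, below is a minimal sketch (not the authors' implementation) contrasting a value-agnostic objective, which only matches the attention weight matrix, with a value-aware objective, which matches the full sub-layer output that includes the value vectors. The top-k sparsification and all function names here are illustrative stand-ins for a generic sparse approximation.

```python
# Minimal sketch: value-agnostic vs. value-aware approximation error.
# All names and the top-k sparsifier are illustrative assumptions,
# not the method from the paper.
import torch


def attention_matrix(Q, K):
    # Standard softmax attention weights: A = softmax(Q K^T / sqrt(d)).
    d = Q.shape[-1]
    return torch.softmax(Q @ K.transpose(-1, -2) / d**0.5, dim=-1)


def value_agnostic_error(A_true, A_approx):
    # Measures only how well the attention weights are approximated,
    # ignoring the value vectors entirely.
    return torch.linalg.norm(A_true - A_approx)


def value_aware_error(A_true, A_approx, V):
    # Measures the error of the actual attention output A @ V,
    # which is what downstream layers consume.
    return torch.linalg.norm(A_true @ V - A_approx @ V)


def topk_sparse(A, k):
    # Toy sparse approximation: keep the top-k weights per query row
    # and renormalize (a stand-in for any sparse attention scheme).
    vals, idx = A.topk(k, dim=-1)
    A_sparse = torch.zeros_like(A).scatter_(-1, idx, vals)
    return A_sparse / A_sparse.sum(dim=-1, keepdim=True)


torch.manual_seed(0)
n, d = 64, 32
Q, K, V = (torch.randn(n, d) for _ in range(3))
A = attention_matrix(Q, K)
A_hat = topk_sparse(A, k=8)
print("value-agnostic error:", value_agnostic_error(A, A_hat).item())
print("value-aware error:   ", value_aware_error(A, A_hat, V).item())
```

Under this framing, two approximations with the same value-agnostic error can yield very different value-aware errors, depending on how the dropped attention mass aligns with the value vectors.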
Anthology ID:
2021.emnlp-main.753
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9567–9574
URL:
https://aclanthology.org/2021.emnlp-main.753
DOI:
10.18653/v1/2021.emnlp-main.753
Cite (ACL):
Ankit Gupta and Jonathan Berant. 2021. Value-aware Approximate Attention. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9567–9574, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Value-aware Approximate Attention (Gupta & Berant, EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.753.pdf
Video:
https://aclanthology.org/2021.emnlp-main.753.mp4
Code:
ag1988/value_aware_attn