On the Locality of Attention in Direct Speech Translation

Belen Alastruey, Javier Ferrando, Gerard I. Gállego, Marta R. Costa-jussà


Abstract
Transformers have achieved state-of-the-art results across multiple NLP tasks. However, the complexity of the self-attention mechanism scales quadratically with the sequence length, creating an obstacle for tasks involving long sequences, such as those in the speech domain. In this paper, we discuss the usefulness of self-attention for Direct Speech Translation. First, we analyze the layer-wise token contributions in the self-attention of the encoder, unveiling local diagonal patterns. To prove that some attention weights are avoidable, we propose to substitute the standard self-attention with a local efficient one, setting the amount of context used according to the results of the analysis. With this approach, our model matches the baseline performance while improving efficiency by skipping the computation of the weights that standard attention discards.
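As a rough illustration of the local attention the abstract describes, here is a minimal PyTorch sketch of windowed self-attention: each position attends only within a fixed radius of the diagonal, mirroring the diagonal patterns the analysis unveils. The function name and the window_size parameter are hypothetical choices for this sketch, not the paper's implementation; a mask-based form like this one still materializes the full score matrix, whereas a truly efficient local attention computes only the banded scores.

```python
# Minimal sketch of windowed (local) self-attention.
# Hypothetical: window_size is the per-side context radius,
# which the paper sets based on an attention analysis.
import torch
import torch.nn.functional as F

def local_self_attention(q, k, v, window_size):
    """q, k, v: (batch, seq_len, d). Attention weights farther than
    `window_size` positions from the diagonal are masked out."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (batch, n, n)
    n = scores.size(-1)
    pos = torch.arange(n)
    # Band mask: True where |i - j| > window_size
    mask = (pos[None, :] - pos[:, None]).abs() > window_size
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Usage: a batch of 2 sequences of length 100 with dimension 64,
# each position attending to 16 neighbors on either side.
q = k = v = torch.randn(2, 100, 64)
out = local_self_attention(q, k, v, window_size=16)
```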
Anthology ID:
2022.acl-srw.32
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Samuel Louvan, Andrea Madotto, Brielen Madureira
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
402–412
URL:
https://aclanthology.org/2022.acl-srw.32
DOI:
10.18653/v1/2022.acl-srw.32
Cite (ACL):
Belen Alastruey, Javier Ferrando, Gerard I. Gállego, and Marta R. Costa-jussà. 2022. On the Locality of Attention in Direct Speech Translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 402–412, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
On the Locality of Attention in Direct Speech Translation (Alastruey et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-srw.32.pdf
Video:
https://aclanthology.org/2022.acl-srw.32.mp4