Variational Attention for Sequence-to-Sequence Models

Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, Pascal Poupart


Abstract
The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network. In natural language processing, sequence-to-sequence (Seq2Seq) models typically serve as encoder-decoder networks. When combined with a traditional (deterministic) attention mechanism, the variational latent space may be bypassed by the attention model, and thus becomes ineffective. In this paper, we propose a variational attention mechanism for VED, where the attention vector is also modeled as Gaussian distributed random variables. Results on two experiments show that, without loss of quality, our proposed method alleviates the bypassing phenomenon as it increases the diversity of generated sentences.
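To make the abstract's idea concrete, below is a minimal PyTorch-style sketch of an attention module whose context vector is treated as a Gaussian random variable and sampled with the reparameterization trick, contributing a KL term to the training objective just like the sentence-level latent variable of a VED. All class, variable, and dimension names are illustrative assumptions and are not taken from the paper or the linked repository; the prior here is fixed to a standard normal purely for simplicity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalAttention(nn.Module):
    """Illustrative sketch: the attention context vector is modeled as a
    Gaussian whose mean is the usual deterministic context; a sample is
    drawn via the reparameterization trick. Not the authors' implementation."""

    def __init__(self, enc_dim, dec_dim):
        super().__init__()
        self.score = nn.Linear(enc_dim + dec_dim, 1)    # additive-style scoring (assumed)
        self.logvar_net = nn.Linear(enc_dim, enc_dim)   # predicts log-variance of the context

    def forward(self, dec_state, enc_outputs):
        # dec_state: (batch, dec_dim); enc_outputs: (batch, src_len, enc_dim)
        src_len = enc_outputs.size(1)
        expanded = dec_state.unsqueeze(1).expand(-1, src_len, -1)
        scores = self.score(torch.cat([enc_outputs, expanded], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)               # attention weights, (batch, src_len)

        # The deterministic context vector serves as the posterior mean.
        mu = torch.bmm(alpha.unsqueeze(1), enc_outputs).squeeze(1)   # (batch, enc_dim)
        logvar = self.logvar_net(mu)

        # Reparameterization: c = mu + sigma * eps, with eps ~ N(0, I).
        eps = torch.randn_like(mu)
        context = mu + torch.exp(0.5 * logvar) * eps

        # KL divergence to a standard-normal prior (one possible prior choice).
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return context, kl

Because the sampled context incurs its own KL penalty, the decoder can no longer route all source information through a purely deterministic attention path, which is the bypassing behavior the paper aims to alleviate.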
Anthology ID:
C18-1142
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Editors:
Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1672–1682
URL:
https://aclanthology.org/C18-1142
Cite (ACL):
Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, and Pascal Poupart. 2018. Variational Attention for Sequence-to-Sequence Models. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1672–1682, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Variational Attention for Sequence-to-Sequence Models (Bahuleyan et al., COLING 2018)
PDF:
https://aclanthology.org/C18-1142.pdf
Code:
HareeshBahuleyan/tf-var-attention