On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation

Victor Prokhorov, Ehsan Shareghi, Yingzhen Li, Mohammad Taher Pilehvar, Nigel Collier


Abstract
Variational Autoencoders (VAEs) are known to learn uninformative latent representations of the input due to issues such as approximate posterior collapse or entanglement of the latent space. We impose an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function. While the explicit constraint naturally avoids posterior collapse, we use it to further understand the significance of the KL term in controlling the information transmitted through the VAE channel. Within this framework, we explore different properties of the estimated posterior distribution, and highlight the trade-off between the amount of information encoded in a latent code during training and the generative capacity of the model.
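As a rough illustration of the constrained objective described in the abstract, the sketch below assumes a penalty of the form β·|KL(q(z|x)‖p(z)) − C|, where C is the target KL rate; the function name, hyperparameter values, and tensor shapes are illustrative, not taken from the authors' released code (see the repository linked below for the actual implementation).

```python
# Minimal sketch of a VAE loss with an explicit KL constraint:
#   reconstruction + beta * |KL(q(z|x) || p(z)) - C|
# All names and defaults here are assumptions for illustration only.
import torch
import torch.nn.functional as F

def constrained_vae_loss(logits, targets, mu, logvar, target_kl=15.0, beta=1.0):
    """Reconstruction loss plus a penalty pulling the KL term toward target_kl (C)."""
    # Token-level reconstruction term (cross-entropy over the vocabulary).
    rec = F.cross_entropy(
        logits.view(-1, logits.size(-1)), targets.view(-1), reduction="sum"
    )
    # Analytic KL between the diagonal Gaussian posterior and a standard normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Explicit constraint: penalise deviation of the KL from the chosen rate C.
    loss = rec + beta * torch.abs(kl - target_kl)
    return loss, kl
```

Setting C to different values controls how much information the encoder is allowed to place in the latent code, which is the trade-off the paper studies.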
Anthology ID:
D19-5612
Volume:
Proceedings of the 3rd Workshop on Neural Generation and Translation
Month:
November
Year:
2019
Address:
Hong Kong
Editors:
Alexandra Birch, Andrew Finch, Hiroaki Hayashi, Ioannis Konstas, Thang Luong, Graham Neubig, Yusuke Oda, Katsuhito Sudoh
Venue:
NGT
Publisher:
Association for Computational Linguistics
Pages:
118–127
URL:
https://aclanthology.org/D19-5612
DOI:
10.18653/v1/D19-5612
Cite (ACL):
Victor Prokhorov, Ehsan Shareghi, Yingzhen Li, Mohammad Taher Pilehvar, and Nigel Collier. 2019. On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 118–127, Hong Kong. Association for Computational Linguistics.
Cite (Informal):
On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation (Prokhorov et al., NGT 2019)
PDF:
https://aclanthology.org/D19-5612.pdf
Code
VictorProkhorov/KL_Text_VAE
Data
WebText