Enhancing Variational Autoencoders with Mutual Information Neural Estimation for Text Generation

Dong Qian, William K. Cheung


Abstract
While broadly applicable to many natural language processing (NLP) tasks, variational autoencoders (VAEs) are hard to train due to the posterior collapse issue, where the latent variable fails to encode the input data effectively. Various approaches have been proposed to alleviate this problem and improve the capability of the VAE. In this paper, we propose to introduce a mutual information (MI) term between the input and its latent variable to regularize the objective of the VAE. Since estimating the MI in a high-dimensional space is intractable, we employ neural networks to estimate the MI and provide a training algorithm based on the convex duality approach. Our experimental results on three benchmark datasets demonstrate that, compared to state-of-the-art baselines, the proposed model exhibits less posterior collapse and achieves comparable or better performance in language modeling and text generation. We also qualitatively evaluate the inferred latent space and show that the proposed model can generate more reasonable and diverse sentences via linear interpolation in the latent space.
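The convex-duality approach the abstract refers to is the Donsker-Varadhan representation of KL divergence, which gives the lower bound I(X; Z) >= E_{p(x,z)}[T] - log E_{p(x)p(z)}[exp(T)] for any critic function T, and which MINE-style estimators maximize over a neural network family. The following sketch (not the authors' code; NumPy, with a deliberately simple one-parameter critic standing in for the statistics network) illustrates the bound on a pair of correlated Gaussians, where the true MI has a closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.8, 100_000

# Joint samples (x, z) from a correlated bivariate Gaussian
x = rng.standard_normal(n)
z = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# "Marginal" samples: shuffle z to break its dependence on x
z_shuf = rng.permutation(z)

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on I(X; Z):
    E_{p(x,z)}[T] - log E_{p(x)p(z)}[exp(T)]."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

# Toy critic family T_a(x, z) = a * x * z (a stand-in for the trained
# statistics network); take the best bound over a small grid of a values
best = max(dv_bound(a * x * z, a * x * z_shuf)
           for a in np.linspace(0.05, 0.95, 19))

true_mi = -0.5 * np.log(1.0 - rho**2)  # closed form for this Gaussian pair
print(f"DV lower bound: {best:.3f}, true MI: {true_mi:.3f}")
```

The estimate stays below the true MI because the toy critic family is too weak to attain the supremum; in the paper's setting the critic is a neural network trained jointly with the VAE, tightening the bound.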
Anthology ID:
D19-1416
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4047–4057
URL:
https://aclanthology.org/D19-1416
DOI:
10.18653/v1/D19-1416
Cite (ACL):
Dong Qian and William K. Cheung. 2019. Enhancing Variational Autoencoders with Mutual Information Neural Estimation for Text Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4047–4057, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Enhancing Variational Autoencoders with Mutual Information Neural Estimation for Text Generation (Qian & Cheung, EMNLP-IJCNLP 2019)
PDF:
https://aclanthology.org/D19-1416.pdf
Attachment:
D19-1416.Attachment.pdf