Learning VAE-LDA Models with Rounded Reparameterization Trick

Runzhi Tian, Yongyi Mao, Richong Zhang


Abstract
The introduction of VAE provides an efficient framework for the learning of generative models, including generative topic models. However, when the topic model is a Latent Dirichlet Allocation (LDA) model, a central technique of VAE, the reparameterization trick, is not directly applicable. This is because no reparameterization of the Dirichlet distribution is known to date that allows the trick to be applied. In this work, we propose a new method, which we call Rounded Reparameterization Trick (RRT), to reparameterize Dirichlet distributions for the learning of VAE-LDA models. This method, when applied to a VAE-LDA model, is shown experimentally to outperform existing neural topic models on several benchmark datasets and on a synthetic dataset.
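For context, the abstract contrasts the Dirichlet case with distributions that do admit the reparameterization trick. Below is a minimal sketch of the standard Gaussian version of the trick used in ordinary VAEs; it is background only, not the paper's RRT, and the function name is an illustrative placeholder.

import torch

def gaussian_reparameterize(mu, log_var):
    """Draw z ~ N(mu, sigma^2) as a differentiable function of (mu, log_var).

    Sampling is rewritten as z = mu + sigma * eps with eps ~ N(0, I), so
    gradients flow through mu and log_var while the randomness stays in eps.
    """
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps

# For a Dirichlet-distributed topic proportion theta ~ Dir(alpha), no exact
# analogue "theta = f(alpha, eps)" with eps independent of alpha is known,
# which is the obstacle the paper's Rounded Reparameterization Trick addresses.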
Anthology ID:
2020.emnlp-main.101
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1315–1325
URL:
https://aclanthology.org/2020.emnlp-main.101
DOI:
10.18653/v1/2020.emnlp-main.101
Cite (ACL):
Runzhi Tian, Yongyi Mao, and Richong Zhang. 2020. Learning VAE-LDA Models with Rounded Reparameterization Trick. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1315–1325, Online. Association for Computational Linguistics.
Cite (Informal):
Learning VAE-LDA Models with Rounded Reparameterization Trick (Tian et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.101.pdf
Video:
https://slideslive.com/38939213