%0 Conference Proceedings
%T Learning VAE-LDA Models with Rounded Reparameterization Trick
%A Tian, Runzhi
%A Mao, Yongyi
%A Zhang, Richong
%Y Webber, Bonnie
%Y Cohn, Trevor
%Y He, Yulan
%Y Liu, Yang
%S Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
%D 2020
%8 November
%I Association for Computational Linguistics
%C Online
%F tian-etal-2020-learning
%X The introduction of VAE provides an efficient framework for the learning of generative models, including generative topic models. However, when the topic model is a Latent Dirichlet Allocation (LDA) model, a central technique of VAE, the reparameterization trick, fails to be applicable. This is because no reparameterization form of Dirichlet distributions is known to date that allows the use of the reparameterization trick. In this work, we propose a new method, which we call Rounded Reparameterization Trick (RRT), to reparameterize Dirichlet distributions for the learning of VAE-LDA models. This method, when applied to a VAE-LDA model, is shown experimentally to outperform the existing neural topic models on several benchmark datasets and on a synthetic dataset.
%R 10.18653/v1/2020.emnlp-main.101
%U https://aclanthology.org/2020.emnlp-main.101
%U https://doi.org/10.18653/v1/2020.emnlp-main.101
%P 1315-1325