Ziye Chen
2021
Tree-Structured Topic Modeling with Nonparametric Neural Variational Inference
Ziye Chen | Cheng Ding | Zusheng Zhang | Yanghui Rao | Haoran Xie
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Topic modeling has been widely used for discovering the latent semantic structure of documents, but most existing methods learn topics with a flat structure. Although probabilistic models can generate topic hierarchies by introducing nonparametric priors such as the Chinese restaurant process, such methods have data scalability issues. In this study, we develop a tree-structured topic model by leveraging nonparametric neural variational inference. Specifically, the latent components of the stick-breaking process are first learned for each document, and then the affiliations of latent components are modeled by the dependency matrices between network layers. Utilizing this network structure, we can efficiently extract a tree-structured topic hierarchy with a reasonable structure, low redundancy, and adaptable widths. Experiments on real-world datasets validate the effectiveness of our method.
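The stick-breaking process mentioned in the abstract can be illustrated with a short sketch. This is a generic truncated stick-breaking construction (weights obtained by repeatedly breaking off a Beta-distributed fraction of the remaining stick), not the paper's actual inference network; the function name and the truncation parameter are illustrative assumptions.

```python
import random

def stick_breaking(alpha, num_components):
    """Truncated stick-breaking construction (illustrative sketch).

    Each step breaks off a Beta(1, alpha)-distributed fraction of the
    remaining stick; the resulting weights are nonnegative and sum to
    less than 1 (the remainder is the unbroken stick).
    """
    weights = []
    remaining = 1.0
    for _ in range(num_components):
        fraction = random.betavariate(1.0, alpha)  # proportion broken off
        weights.append(remaining * fraction)
        remaining *= 1.0 - fraction
    return weights
```

A smaller concentration `alpha` puts more mass on the first few components, which is why the construction yields topic hierarchies with adaptable widths.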
2020
Neural Mixed Counting Models for Dispersed Topic Discovery
Jiemin Wu | Yanghui Rao | Zusheng Zhang | Haoran Xie | Qing Li | Fu Lee Wang | Ziye Chen
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Mixed counting models that use the negative binomial distribution as the prior can effectively model over-dispersed and hierarchically dependent random variables; thus they have attracted much attention in mining dispersed document topics. However, existing parameter inference methods such as Monte Carlo sampling are quite time-consuming. In this paper, we propose two efficient neural mixed counting models, i.e., the Negative Binomial-Neural Topic Model (NB-NTM) and the Gamma Negative Binomial-Neural Topic Model (GNB-NTM), for dispersed topic discovery. Neural variational inference algorithms are developed to infer model parameters by using the reparameterization of the Gamma distribution and the Gaussian approximation of the Poisson distribution. Experiments on real-world datasets indicate that our models outperform state-of-the-art baseline models in terms of perplexity and topic coherence. The results also validate that both NB-NTM and GNB-NTM can produce explainable intermediate variables by generating dispersed proportions of document topics.
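The Gaussian approximation of the Poisson distribution mentioned in the abstract is a standard trick for making Poisson sampling differentiable: for a large rate λ, Poisson(λ) is approximately N(λ, λ), so a draw can be rewritten as λ + √λ·ε with ε ~ N(0, 1), the usual reparameterization form. The sketch below is a minimal, generic illustration of that approximation, not the paper's implementation; the function name is an assumption.

```python
import math
import random

def reparameterized_poisson_sample(lam, eps=None):
    """Differentiable surrogate for a Poisson draw (illustrative sketch).

    For large rate lam, Poisson(lam) ~= N(lam, lam), so a sample can be
    written as lam + sqrt(lam) * eps with eps ~ N(0, 1). Written this way,
    the sample is a deterministic function of lam and the noise eps, so
    gradients can flow through lam (the reparameterization trick).
    """
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    # Clamp at zero since a Poisson variable is nonnegative.
    return max(lam + math.sqrt(lam) * eps, 0.0)
```

Averaged over many noise draws, the surrogate matches the Poisson mean λ, while keeping the sampling path differentiable for gradient-based variational inference.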
Co-authors
- Zusheng Zhang 2
- Yanghui Rao 2
- Haoran Xie 2
- Cheng Ding 1
- Jiemin Wu 1