A Joint Learning Approach for Semi-supervised Neural Topic Modeling

Jeffrey Chiu, Rajat Mittal, Neehal Tumma, Abhishek Sharma, Finale Doshi-Velez


Abstract
Topic models are among the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models rather than traditional statistics-based approaches. We extend these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the best of our knowledge, the first effective upstream semi-supervised neural topic model. We find that LI-NTM outperforms existing neural topic models on document reconstruction benchmarks, with the most notable gains in low-labeled-data regimes and on datasets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.
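The AEVB machinery the abstract refers to can be illustrated with a minimal forward pass of a neural topic model: an encoder maps a bag-of-words vector to a Gaussian over topic space, the reparameterization trick yields a sample, and a topic-word matrix reconstructs a word distribution. This is a generic sketch of AEVB-style topic modeling, not the paper's LI-NTM; all dimensions, weights, and names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
vocab_size, num_topics, hidden = 20, 3, 8

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Encoder weights: bag-of-words -> Gaussian parameters in topic space.
W_h = rng.normal(scale=0.1, size=(vocab_size, hidden))
W_mu = rng.normal(scale=0.1, size=(hidden, num_topics))
W_logvar = rng.normal(scale=0.1, size=(hidden, num_topics))

# Decoder: topic-word matrix (each row is a distribution over the vocabulary).
beta = softmax(rng.normal(size=(num_topics, vocab_size)), axis=1)

def forward(bow):
    """One AEVB forward pass for a single document's bag-of-words vector."""
    h = np.tanh(bow @ W_h)
    mu, logvar = h @ W_mu, h @ W_logvar
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps   # reparameterization trick
    theta = softmax(z)                    # document-topic proportions
    p_words = theta @ beta                # reconstructed word distribution
    return theta, p_words

bow = rng.integers(0, 3, size=vocab_size).astype(float)
theta, p_words = forward(bow)
```

In training, `p_words` would feed a reconstruction loss and the Gaussian parameters a KL term; a semi-supervised variant like LI-NTM additionally ties document labels into this latent structure and learns a classifier jointly.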
Anthology ID:
2022.spnlp-1.5
Volume:
Proceedings of the Sixth Workshop on Structured Prediction for NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Andreas Vlachos, Priyanka Agrawal, André Martins, Gerasimos Lampouras, Chunchuan Lyu
Venue:
spnlp
Publisher:
Association for Computational Linguistics
Pages:
40–51
URL:
https://aclanthology.org/2022.spnlp-1.5
DOI:
10.18653/v1/2022.spnlp-1.5
Cite (ACL):
Jeffrey Chiu, Rajat Mittal, Neehal Tumma, Abhishek Sharma, and Finale Doshi-Velez. 2022. A Joint Learning Approach for Semi-supervised Neural Topic Modeling. In Proceedings of the Sixth Workshop on Structured Prediction for NLP, pages 40–51, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
A Joint Learning Approach for Semi-supervised Neural Topic Modeling (Chiu et al., spnlp 2022)
PDF:
https://aclanthology.org/2022.spnlp-1.5.pdf