Neural Embedding Allocation: Distributed Representations of Topic Models

Kamrun Naher Keya, Yannis Papanikolaou, James R. Foulds


Abstract
We propose a method that uses neural embeddings to improve the performance of any given LDA-style topic model. Our method, called neural embedding allocation (NEA), deconstructs topic models (LDA or otherwise) into interpretable vector-space embeddings of words, topics, documents, authors, and so on, by learning neural embeddings that mimic the topic model. We demonstrate that NEA improves the coherence scores of the original topic model by smoothing out noisy topics when the number of topics is large. Furthermore, we show NEA's effectiveness and generality by deconstructing and smoothing LDA, author-topic models, and the recent mixed membership skip-gram topic model, achieving better performance with the resulting embeddings than several state-of-the-art models.
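To make the reconstruction idea concrete, below is a minimal illustrative sketch (not the authors' released implementation): given the topic-word matrix phi from an already fitted topic model such as LDA, it learns topic and word vectors whose row-wise softmax approximates phi, so the reconstructed distribution can serve as a smoothed version of the original topics. All names and parameters (fit_nea_topics, embed_dim, the learning rate, etc.) are hypothetical choices for this sketch.

```python
# Hypothetical sketch of mimicking a fitted topic model with embeddings.
# phi is assumed to come from an already trained topic model (e.g., LDA).
import numpy as np

def fit_nea_topics(phi, embed_dim=50, lr=0.1, n_iters=2000, seed=0):
    """phi: (K, V) topic-word probabilities from a fitted topic model.
    Returns topic vectors (K, d) and word vectors (V, d) such that
    softmax(topic_vecs @ word_vecs.T, axis=1) approximates phi."""
    rng = np.random.default_rng(seed)
    K, V = phi.shape
    topic_vecs = 0.01 * rng.standard_normal((K, embed_dim))
    word_vecs = 0.01 * rng.standard_normal((V, embed_dim))
    for _ in range(n_iters):
        logits = topic_vecs @ word_vecs.T                # (K, V)
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)                # reconstructed topic-word dist.
        grad = q - phi                                   # d(cross-entropy)/d(logits)
        topic_vecs -= lr * (grad @ word_vecs) / V        # gradient step on topic vectors
        word_vecs -= lr * (grad.T @ topic_vecs) / K      # gradient step on word vectors
    return topic_vecs, word_vecs

# Usage (illustrative): recompute softmax(topic_vecs @ word_vecs.T) and use it
# in place of a noisy phi as a smoothed set of topics.
```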
Anthology ID:
2022.cl-4.18
Volume:
Computational Linguistics, Volume 48, Issue 4 - December 2022
Month:
December
Year:
2022
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
1021–1052
URL:
https://aclanthology.org/2022.cl-4.18
DOI:
10.1162/coli_a_00457
Cite (ACL):
Kamrun Naher Keya, Yannis Papanikolaou, and James R. Foulds. 2022. Neural Embedding Allocation: Distributed Representations of Topic Models. Computational Linguistics, 48(4):1021–1052.
Cite (Informal):
Neural Embedding Allocation: Distributed Representations of Topic Models (Keya et al., CL 2022)
PDF:
https://aclanthology.org/2022.cl-4.18.pdf