Sparse Parallel Training of Hierarchical Dirichlet Process Topic Models

Alexander Terenin, Måns Magnusson, Leif Jonsson


Abstract
To scale non-parametric extensions of probabilistic topic models, such as latent Dirichlet allocation, to larger data sets, practitioners rely increasingly on parallel and distributed systems. In this work, we study data-parallel training for the hierarchical Dirichlet process (HDP) topic model. Based upon a representation of certain conditional distributions within an HDP, we propose a doubly sparse data-parallel sampler for the HDP topic model. This sampler utilizes all available sources of sparsity found in natural language, an important ingredient for efficient computation. We benchmark our method on a well-known corpus (PubMed) with 8m documents and 768m tokens, completing training on a single multi-core machine in under four days.
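For context, a minimal sketch of the HDP topic model in its standard stick-breaking form is given below; the notation is illustrative and does not necessarily match the paper's own.

\begin{align*}
\beta &\sim \operatorname{GEM}(\gamma) && \text{global topic proportions} \\
\phi_k &\sim \operatorname{Dirichlet}(\eta) && \text{topic-word distributions} \\
\theta_d \mid \beta &\sim \operatorname{DP}(\alpha, \beta) && \text{per-document topic proportions} \\
z_{dn} \mid \theta_d &\sim \operatorname{Categorical}(\theta_d) && \text{topic assignment of token } n \text{ in document } d \\
w_{dn} \mid z_{dn}, \phi &\sim \operatorname{Categorical}(\phi_{z_{dn}}) && \text{observed word}
\end{align*}

The "doubly sparse" structure mentioned in the abstract presumably refers to the two usual sources of sparsity in natural language: each document activates only a small subset of topics, and each topic places most of its mass on a small subset of the vocabulary, so a sampler over the assignments z_{dn} can restrict most of its work to the nonzero count entries.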
Anthology ID:
2020.emnlp-main.234
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2925–2934
URL:
https://aclanthology.org/2020.emnlp-main.234
DOI:
10.18653/v1/2020.emnlp-main.234
Cite (ACL):
Alexander Terenin, Måns Magnusson, and Leif Jonsson. 2020. Sparse Parallel Training of Hierarchical Dirichlet Process Topic Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2925–2934, Online. Association for Computational Linguistics.
Cite (Informal):
Sparse Parallel Training of Hierarchical Dirichlet Process Topic Models (Terenin et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.234.pdf
Optional supplementary material:
2020.emnlp-main.234.OptionalSupplementaryMaterial.zip
Video:
https://slideslive.com/38938922
Code:
aterenin/Parallel-HDP-Experiments