Reinforcement Learning for Topic Models

Jeremy Costello, Marek Reformat


Abstract
We apply reinforcement learning techniques to topic modeling by replacing the variational autoencoder in ProdLDA with a continuous-action-space reinforcement learning policy. We train the system with the policy gradient algorithm REINFORCE. Additionally, we introduce several modifications: we modernize the neural network architecture, weight the ELBO loss, use contextual embeddings, and monitor the learning process by computing topic diversity and coherence at each training step. Experiments are performed on 11 data sets. Our unsupervised model outperforms all other unsupervised models and performs on par with or better than most models that use supervised labeling. On certain data sets, our model is outperformed by a model that uses supervised labeling and contrastive learning. We also conduct an ablation study to provide empirical evidence of the performance improvements from our changes to ProdLDA, and we find that the reinforcement learning formulation boosts performance. We open-source our code implementation.
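
For intuition, here is a minimal sketch of the kind of REINFORCE update the abstract describes, not the authors' released implementation. It assumes PyTorch; the names (PolicyNet, beta, reinforce_step), the Gaussian policy over topic logits, and all hyperparameters (vocab_size, n_topics, kl_weight) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: REINFORCE for a ProdLDA-style topic model.
# All names and hyperparameters here are hypothetical, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Encoder acting as a Gaussian policy over continuous topic logits."""
    def __init__(self, vocab_size, n_topics, hidden=200):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
        )
        self.mu = nn.Linear(hidden, n_topics)
        self.log_sigma = nn.Linear(hidden, n_topics)

    def forward(self, bow):
        h = self.body(bow)
        return self.mu(h), self.log_sigma(h)

vocab_size, n_topics, kl_weight = 2000, 50, 0.1   # assumed hyperparameters
policy = PolicyNet(vocab_size, n_topics)
beta = nn.Parameter(torch.randn(n_topics, vocab_size) * 0.02)  # topic-word logits
opt = torch.optim.Adam(list(policy.parameters()) + [beta], lr=2e-3)

def reinforce_step(bow):
    """One policy-gradient step; reward is the reconstruction log-likelihood."""
    mu, log_sigma = policy(bow)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    action = dist.sample()                            # continuous action: topic logits
    theta = F.softmax(action, dim=-1)                 # document-topic proportions
    log_recon = F.log_softmax(theta @ beta, dim=-1)   # ProdLDA-style decoder
    reward = (bow * log_recon).sum(-1)                # per-document reconstruction term
    log_prob = dist.log_prob(action).sum(-1)
    # Weighted ELBO: score-function (REINFORCE) term for the sampled action
    # plus a scaled analytic KL penalty toward a standard-normal prior.
    kl = 0.5 * (mu.pow(2) + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum(-1)
    advantage = (reward - reward.mean()).detach()     # mean baseline reduces variance
    loss = (-log_prob * advantage - reward + kl_weight * kl).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

docs = torch.randint(0, 3, (32, vocab_size)).float()  # toy bag-of-words batch
print(reinforce_step(docs))
```

The key contrast with ProdLDA's VAE is the estimator: sampling the action (rather than reparameterizing) forces gradients to flow to the encoder through the score-function term, which is why a baseline is subtracted from the reward to reduce variance.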
Anthology ID:
2023.findings-acl.265
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4332–4351
URL:
https://aclanthology.org/2023.findings-acl.265
DOI:
10.18653/v1/2023.findings-acl.265
Cite (ACL):
Jeremy Costello and Marek Reformat. 2023. Reinforcement Learning for Topic Models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4332–4351, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Reinforcement Learning for Topic Models (Costello & Reformat, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.265.pdf
Video:
https://aclanthology.org/2023.findings-acl.265.mp4