%0 Conference Proceedings
%T Is Neural Topic Modelling Better than Clustering? An Empirical Study on Clustering with Contextual Embeddings for Topics
%A Zhang, Zihan
%A Fang, Meng
%A Chen, Ling
%A Namazi Rad, Mohammad Reza
%Y Carpuat, Marine
%Y de Marneffe, Marie-Catherine
%Y Meza Ruiz, Ivan Vladimir
%S Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2022
%8 July
%I Association for Computational Linguistics
%C Seattle, United States
%F zhang-etal-2022-neural
%X Recent work incorporates pre-trained word embeddings such as BERT embeddings into Neural Topic Models (NTMs), generating highly coherent topics. However, with high-quality contextualized document representations, do we really need sophisticated neural models to obtain coherent and interpretable topics? In this paper, we conduct thorough experiments showing that directly clustering high-quality sentence embeddings with an appropriate word selecting method can generate more coherent and diverse topics than NTMs, achieving also higher efficiency and simplicity.
%R 10.18653/v1/2022.naacl-main.285
%U https://aclanthology.org/2022.naacl-main.285
%U https://doi.org/10.18653/v1/2022.naacl-main.285
%P 3886-3893