Aligning Human and Computational Coherence Evaluations

Jia Peng Lim, Hady W. Lauw


Abstract
Automated coherence metrics constitute an efficient and popular way to evaluate topic models. Previous work presents a mixed picture of their presumed correlation with human judgment. This work proposes a novel sampling approach to mining topic representations at scale while mitigating sampling bias, enabling the investigation of widely used automated coherence metrics on large corpora. Additionally, this article proposes a novel user study design, an amalgamation of different proxy tasks, to derive finer insight into human decision-making processes; this design subsumes the purpose of simple rating and outlier-detection user studies. Like the sampling approach, the user study is extensive, comprising 40 participants split into eight study groups, each tasked with evaluating its own set of 100 topic representations. When substantiating the use of these metrics, human responses are usually treated as the gold standard. This article further investigates the reliability of human judgment by flipping the comparison and conducting a novel extended analysis of human responses, at both the group and individual levels, against a generic corpus. The results show moderate to good correlation between these metrics and human judgment, especially for generic corpora, and yield further insights into the human perception of coherence. Analyzing inter-metric correlations across corpora likewise shows moderate to good agreement among the metrics. Since these metrics depend on corpus statistics, this article further investigates topical differences between corpora, revealing nuances in the application of these metrics.
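To make the evaluation pipeline concrete, the sketch below computes NPMI coherence, one of the widely used automated metrics the article examines, and correlates it with human ratings via Spearman's rank correlation. This is a minimal illustration, not the authors' implementation: the toy corpus, topic lists, human ratings, and the document-level co-occurrence estimate are all hypothetical assumptions.

```python
from itertools import combinations
from math import log

from scipy.stats import spearmanr


def npmi_coherence(topic_words, documents):
    """Average pairwise NPMI over a topic's top words, estimating
    probabilities from document-level co-occurrence counts."""
    doc_sets = [set(doc) for doc in documents]
    n_docs = len(doc_sets)

    def prob(*words):
        # Fraction of documents containing all the given words.
        return sum(all(w in d for w in words) for d in doc_sets) / n_docs

    scores = []
    for w1, w2 in combinations(topic_words, 2):
        joint = prob(w1, w2)
        if joint <= 0.0:
            scores.append(-1.0)   # convention: never co-occur -> NPMI floor
        elif joint >= 1.0:
            scores.append(1.0)    # co-occur in every document -> NPMI ceiling
        else:
            pmi = log(joint / (prob(w1) * prob(w2)))
            scores.append(pmi / -log(joint))
    return sum(scores) / len(scores)


# Hypothetical reference corpus (tokenized documents) and topic representations.
corpus = [
    ["cat", "dog", "pet"],
    ["dog", "bone", "pet"],
    ["stock", "market", "trade"],
]
topics = [
    ["cat", "dog", "pet"],         # coherent topic
    ["stock", "market", "trade"],  # coherent topic
    ["cat", "market", "bone"],     # incoherent mix
]
human_ratings = [2.6, 2.9, 1.1]  # hypothetical per-topic human coherence ratings

metric_scores = [npmi_coherence(t, corpus) for t in topics]
rho, _ = spearmanr(metric_scores, human_ratings)
print(f"NPMI scores: {[round(s, 3) for s in metric_scores]}")
print(f"Spearman correlation with human ratings: {rho:.2f}")
```

In practice, coherence metrics such as C_v and C_NPMI estimate these probabilities from sliding windows over a large reference corpus (e.g., Wikipedia) rather than from whole toy documents, which is precisely where the corpus dependence examined in the article arises.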
Anthology ID: 2024.cl-3.3
Volume: Computational Linguistics, Volume 50, Issue 3 - September 2024
Month: September
Year: 2024
Address: Cambridge, MA
Venue: CL
Publisher: MIT Press
Pages: 893–952
URL: https://aclanthology.org/2024.cl-3.3
DOI: 10.1162/coli_a_00518
Cite (ACL): Jia Peng Lim and Hady W. Lauw. 2024. Aligning Human and Computational Coherence Evaluations. Computational Linguistics, 50(3):893–952.
Cite (Informal): Aligning Human and Computational Coherence Evaluations (Lim & Lauw, CL 2024)
PDF: https://aclanthology.org/2024.cl-3.3.pdf