Addressing Topic Leakage in Cross-Topic Evaluation for Authorship Verification

Jitkapat Sawatphol, Can Udomcharoenchaikit, Sarana Nutanong


Abstract
Authorship verification (AV) aims to identify whether two texts were written by the same author. We address the challenge of evaluating AV models' robustness against topic shifts. Conventional evaluation assumes minimal topic overlap between training and test data. However, we argue that topic leakage can still occur in the test data, leading to misleading performance estimates and unstable model rankings. To address this, we propose an evaluation method called Heterogeneity-Informed Topic Sampling (HITS), which creates a smaller dataset with a heterogeneously distributed topic set. Our experimental results demonstrate that HITS-sampled datasets yield more stable model rankings across random seeds and evaluation splits. Our contributions include: 1. an analysis of the causes and effects of topic leakage; 2. a demonstration of HITS's effectiveness in reducing the effects of topic leakage; and 3. the Robust Authorship Verification bENchmark (RAVEN), which enables topic shortcut tests to uncover AV models' reliance on topic-specific features.
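The abstract describes HITS only at a high level; the exact procedure is defined in the paper itself. Purely as an illustrative sketch of topic-level subsampling, and not the authors' algorithm, the following hypothetical Python snippet shows one way an evaluation subset with a heterogeneously distributed topic set could be drawn. All function and variable names here are assumptions introduced for illustration.

    import random
    from collections import defaultdict

    def heterogeneous_topic_sample(docs, topics, n_topics, per_topic, seed=0):
        """Hypothetical sketch (not the authors' HITS procedure):
        subsample an evaluation set whose documents are spread across
        many distinct topics rather than dominated by a few frequent ones."""
        rng = random.Random(seed)
        by_topic = defaultdict(list)
        for doc, topic in zip(docs, topics):
            by_topic[topic].append(doc)
        # Keep only topics with enough documents to contribute equally.
        eligible = [t for t, ds in by_topic.items() if len(ds) >= per_topic]
        # Draw a diverse set of topics, then an equal number of docs per topic.
        chosen = rng.sample(eligible, min(n_topics, len(eligible)))
        sample = []
        for t in chosen:
            sample.extend(rng.sample(by_topic[t], per_topic))
        return sample

    # Example usage with toy, made-up data:
    docs = [f"doc{i}" for i in range(100)]
    topics = [f"topic{i % 10}" for i in range(100)]
    eval_set = heterogeneous_topic_sample(docs, topics, n_topics=5, per_topic=3)

Sampling an equal number of documents per chosen topic is one simple way to keep any single topic from dominating the evaluation split; the paper's actual criterion for heterogeneity may differ.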
Anthology ID: 2024.tacl-1.75
Volume: Transactions of the Association for Computational Linguistics, Volume 12
Year: 2024
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 1363–1377
URL: https://aclanthology.org/2024.tacl-1.75
DOI: 10.1162/tacl_a_00709
Cite (ACL): Jitkapat Sawatphol, Can Udomcharoenchaikit, and Sarana Nutanong. 2024. Addressing Topic Leakage in Cross-Topic Evaluation for Authorship Verification. Transactions of the Association for Computational Linguistics, 12:1363–1377.
Cite (Informal): Addressing Topic Leakage in Cross-Topic Evaluation for Authorship Verification (Sawatphol et al., TACL 2024)
PDF: https://aclanthology.org/2024.tacl-1.75.pdf