Reduce & Attribute: Two-Step Authorship Attribution for Large-Scale Problems

Michael Tschuggnall, Benjamin Murauer, Günther Specht


Abstract
Authorship attribution has been an active research area for decades. Nevertheless, most approaches consider only a handful of candidate authors, making them difficult to apply to recent scenarios involving thousands of authors, which arise from the many ways text is now shared digitally. In this study, we focus on such large-scale problems and propose to effectively reduce the number of candidate authors before applying common attribution techniques. Using document embeddings, we show on a novel, comprehensive dataset collection that the set of candidate authors can be reduced with high accuracy. Moreover, we show that common authorship attribution methods benefit substantially from this preliminary reduction when thousands of authors are involved.
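
The two-step idea from the abstract (first shrink the candidate set using document embeddings, then run a standard attribution method on the remaining authors) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the synthetic Gaussian "embeddings", cosine similarity, the mean-vector author profiles, the nearest-neighbour attributor, and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for document embeddings: each author has a latent
# "style" vector, and each document is a noisy sample around it.
n_authors, docs_per_author, dim = 1000, 3, 32
author_styles = rng.normal(size=(n_authors, dim))
train = {a: author_styles[a] + 0.1 * rng.normal(size=(docs_per_author, dim))
         for a in range(n_authors)}

def reduce_candidates(query, profiles, k):
    """Step 1 (reduce): keep the k authors whose profile embedding is
    most cosine-similar to the query document."""
    sims = profiles @ query / (np.linalg.norm(profiles, axis=1)
                               * np.linalg.norm(query))
    return np.argsort(sims)[-k:]

def attribute(query, candidates, train):
    """Step 2 (attribute): a simple nearest-neighbour attributor run
    only on the reduced candidate set (a real system would plug in any
    standard attribution method here)."""
    best, best_sim = None, -np.inf
    for a in candidates:
        for doc in train[a]:
            sim = doc @ query / (np.linalg.norm(doc) * np.linalg.norm(query))
            if sim > best_sim:
                best, best_sim = int(a), sim
    return best

# Author profile = mean of that author's training-document embeddings.
profiles = np.stack([train[a].mean(axis=0) for a in range(n_authors)])

true_author = 42
query = author_styles[true_author] + 0.1 * rng.normal(size=dim)
candidates = reduce_candidates(query, profiles, k=20)   # 1000 -> 20 authors
predicted = attribute(query, candidates, train)
```

The point of the reduction step is cost: the expensive attribution method only ever compares the query against the 20 surviving candidates instead of all 1000 authors, which is what makes the second step feasible at scale.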
Anthology ID: K19-1089
Volume: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Mohit Bansal, Aline Villavicencio
Venue: CoNLL
SIG: SIGNLL
Publisher: Association for Computational Linguistics
Pages: 951–960
URL: https://aclanthology.org/K19-1089
DOI: 10.18653/v1/K19-1089
Cite (ACL):
Michael Tschuggnall, Benjamin Murauer, and Günther Specht. 2019. Reduce & Attribute: Two-Step Authorship Attribution for Large-Scale Problems. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 951–960, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Reduce & Attribute: Two-Step Authorship Attribution for Large-Scale Problems (Tschuggnall et al., CoNLL 2019)
PDF: https://aclanthology.org/K19-1089.pdf