Quantifying the Effects of Text Duplication on Semantic Models

Alexandra Schofield, Laure Thompson, David Mimno

Abstract
Duplicate documents are a pervasive problem in text datasets and can have a strong effect on unsupervised models. Methods to remove duplicate texts are typically heuristic or very expensive, so it is vital to know when and why they are needed. We measure the sensitivity of two latent semantic methods to the presence of different levels of document repetition. By artificially creating different forms of duplicate text we confirm several hypotheses about how repeated text impacts models. While a small amount of duplication is tolerable, substantial over-representation of subsets of the text may overwhelm meaningful topical patterns.
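The experiment the abstract describes can be illustrated with a short sketch. The following is not the authors' code, only a minimal Python analogue of the setup: inject exact copies of one document into a small corpus at increasing levels, refit a topic model (scikit-learn's LDA here, standing in for the latent semantic methods studied), and watch the duplicated text crowd out other topical patterns. The toy corpus, duplication levels, and model settings are all illustrative assumptions.

```python
# Sketch of a duplication-sensitivity experiment (illustrative, not the paper's code).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A tiny two-theme corpus (finance vs. sports) invented for this sketch.
docs = [
    "the stock market rose as investors bought shares",
    "the team won the game with a late goal",
    "interest rates and inflation worry the central bank",
    "the striker scored twice in the championship match",
    "quarterly earnings beat forecasts and the stock rallied",
    "fans cheered as the coach celebrated the victory",
]

def with_duplicates(docs, index, copies):
    """Return the corpus plus `copies` exact copies of docs[index]."""
    return docs + [docs[index]] * copies

for copies in (0, 5, 50):  # increasing over-representation of one document
    corpus = with_duplicates(docs, index=0, copies=copies)
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(corpus)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    vocab = np.array(vec.get_feature_names_out())
    print(f"\n{copies} duplicate copies of doc 0:")
    for k, topic in enumerate(lda.components_):
        top = vocab[np.argsort(topic)[::-1][:5]]  # five highest-weight words
        print(f"  topic {k}: {' '.join(top)}")
```

With no duplicates, the two topics roughly separate the finance and sports vocabulary; as copies of the first document accumulate, its words dominate a topic's top terms, mirroring the abstract's claim that substantial over-representation overwhelms meaningful topical patterns.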
Anthology ID: D17-1290
Volume: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 2737–2747
URL: https://aclanthology.org/D17-1290
DOI: 10.18653/v1/D17-1290
Cite (ACL): Alexandra Schofield, Laure Thompson, and David Mimno. 2017. Quantifying the Effects of Text Duplication on Semantic Models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2737–2747, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): Quantifying the Effects of Text Duplication on Semantic Models (Schofield et al., EMNLP 2017)
PDF: https://aclanthology.org/D17-1290.pdf
Data: New York Times Annotated Corpus