%0 Conference Proceedings
%T Measuring the Effects of Bias in Training Data for Literary Classification
%A Bagga, Sunyam
%A Piper, Andrew
%Y DeGaetano, Stefania
%Y Kazantseva, Anna
%Y Reiter, Nils
%Y Szpakowicz, Stan
%S Proceedings of the 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
%D 2020
%8 December
%I International Committee on Computational Linguistics
%C Online
%F bagga-piper-2020-measuring
%X Downstream effects of biased training data have become a major concern of the NLP community. How this may impact the automated curation and annotation of cultural heritage material is currently not well known. In this work, we create an experimental framework to measure the effects of different types of stylistic and social bias within training data for the purposes of literary classification, as one important subclass of cultural material. Because historical collections are often sparsely annotated, much like our knowledge of history is incomplete, researchers often cannot know the underlying distributions of different document types and their various sub-classes. This means that bias is likely to be an intrinsic feature of training data when it comes to cultural heritage material. Our aim in this study is to investigate which classification methods may help mitigate the effects of different types of bias within curated samples of training data. We find that machine learning techniques such as BERT or SVM are robust against reproducing the different kinds of bias within our test data, except in the most extreme cases. We hope that this work will spur further research into the potential effects of bias within training data for other cultural heritage material beyond the study of literature.
%U https://aclanthology.org/2020.latechclfl-1.9
%P 74-84