Batavia asked for advice. Pretrained language models for Named Entity Recognition in historical texts.

Sophie I. Arnoult, Lodewijk Petram, Piek Vossen


Abstract
Pretrained language models like BERT have advanced the state of the art for many NLP tasks. For resource-rich languages, one can choose among a number of language-specific models, while multilingual models are also worth considering. These models are well known for their crosslingual performance, but they have also shown competitive in-language performance on some tasks. We consider monolingual and multilingual models from the perspective of historical texts, and in particular of texts enriched with editorial notes: how do language models deal with the historical and editorial content in these texts? We present a new Named Entity Recognition dataset for Dutch based on 17th- and 18th-century United East India Company (VOC) reports extended with modern editorial notes. Our experiments with multilingual and Dutch pretrained language models confirm the crosslingual abilities of multilingual models, while showing that all language models can leverage mixed-variant data. In particular, language models successfully incorporate notes when predicting entities in historical texts. We also find that multilingual models outperform monolingual models on our data, but that this advantage depends on the task at hand: multilingual models lose it when confronted with more semantic tasks.
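A minimal sketch of the kind of token-classification setup the abstract describes, using the standard Hugging Face transformers interface. The checkpoint names are real public models (bert-base-multilingual-cased for the multilingual setting, GroNLP/bert-base-dutch-cased for Dutch BERTje), but the label set, the example sentence, and the configuration are illustrative assumptions, not the authors' exact pipeline; the classification head is freshly initialized here and would still be fine-tuned on the VOC data before use.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical BIO label set; the paper's actual tag inventory may differ.
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

# Multilingual baseline; swap in "GroNLP/bert-base-dutch-cased" (BERTje)
# for a monolingual Dutch run.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The token-classification head is randomly initialized at this point and
# would be fine-tuned on annotated NER data before producing useful output.
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels)
)

# Illustrative VOC-style sentence, pre-split into words (not from the dataset):
# "Batavia asks the Heren XVII for advice."
words = ["Batavia", "vraagt", "advies", "aan", "de", "Heren", "XVII"]
encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits  # shape: (1, num_subwords, num_labels)
pred_ids = logits.argmax(dim=-1).squeeze(0).tolist()

# Map subword predictions back to words, keeping the first subword's label.
previous = None
for i, word_id in enumerate(encoding.word_ids()):
    if word_id is not None and word_id != previous:
        print(words[word_id], labels[pred_ids[i]])
    previous = word_id

The same interface covers the paper's comparison of monolingual and multilingual settings: only the model name changes, and fine-tuning on mixed historical text and editorial notes would follow with a standard training loop.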
Anthology ID:
2021.latechclfl-1.3
Volume:
Proceedings of the 5th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic (online)
Editors:
Stefania Degaetano-Ortlieb, Anna Kazantseva, Nils Reiter, Stan Szpakowicz
Venue:
LaTeCHCLfL
SIG:
SIGHUM
Publisher:
Association for Computational Linguistics
Pages:
21–30
URL:
https://aclanthology.org/2021.latechclfl-1.3
DOI:
10.18653/v1/2021.latechclfl-1.3
Cite (ACL):
Sophie I. Arnoult, Lodewijk Petram, and Piek Vossen. 2021. Batavia asked for advice. Pretrained language models for Named Entity Recognition in historical texts. In Proceedings of the 5th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 21–30, Punta Cana, Dominican Republic (online). Association for Computational Linguistics.
Cite (Informal):
Batavia asked for advice. Pretrained language models for Named Entity Recognition in historical texts. (Arnoult et al., LaTeCHCLfL 2021)
PDF:
https://aclanthology.org/2021.latechclfl-1.3.pdf
Video:
https://aclanthology.org/2021.latechclfl-1.3.mp4
Code:
cltl/voc-missives