Examining Language Modeling Assumptions Using an Annotated Literary Dialect Corpus

Craig Messner, Thomas Lippincott


Abstract
We present a dataset of 19th-century American literary orthovariant tokens with a novel layer of human-annotated dialect group tags designed to serve as the basis for computational experiments exploring literarily meaningful orthographic variation. We perform an initial broad set of experiments over this dataset using both token-level (BERT) and character-level (CANINE) contextual language models. We find indications that the “dialect effect” produced by intentional orthographic variation employs multiple linguistic channels, and that these channels can be surfaced to varying degrees under particular language modeling assumptions. Specifically, we find evidence that the choice of tokenization scheme meaningfully impacts the type of orthographic information a model is able to surface.
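The contrast between a subword-tokenized model (BERT) and a character-level model (CANINE) can be illustrated with a minimal sketch. The snippet below assumes the Hugging Face transformers library and illustrative checkpoints (bert-base-uncased, google/canine-c); the example sentence is invented and this is not the authors' experimental code.

```python
# Sketch (not the authors' setup): compare how a subword tokenizer and a
# character-level model handle an intentionally variant dialect spelling.
from transformers import AutoTokenizer, AutoModel
import torch

sentence = "I reckon dat's de troof."  # hypothetical literary-dialect example

# Subword tokenization: variant spellings tend to fragment into rare subword pieces.
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(bert_tok.tokenize(sentence))

# Character-level input: CANINE consumes Unicode code points directly,
# so no subword segmentation is imposed on the variant spellings.
canine_tok = AutoTokenizer.from_pretrained("google/canine-c")
canine = AutoModel.from_pretrained("google/canine-c")
inputs = canine_tok(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = canine(**inputs).last_hidden_state
print(hidden.shape)  # one contextual vector per character (plus special tokens)
```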
Anthology ID: 2024.nlp4dh-1.32
Volume: Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities
Month: November
Year: 2024
Address: Miami, USA
Editors: Mika Hämäläinen, Emily Öhman, So Miyagawa, Khalid Alnajjar, Yuri Bizzoni
Venue: NLP4DH
Publisher: Association for Computational Linguistics
Pages: 325–330
URL: https://aclanthology.org/2024.nlp4dh-1.32
Cite (ACL): Craig Messner and Thomas Lippincott. 2024. Examining Language Modeling Assumptions Using an Annotated Literary Dialect Corpus. In Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities, pages 325–330, Miami, USA. Association for Computational Linguistics.
Cite (Informal): Examining Language Modeling Assumptions Using an Annotated Literary Dialect Corpus (Messner & Lippincott, NLP4DH 2024)
PDF: https://aclanthology.org/2024.nlp4dh-1.32.pdf