Jonathan Allen

Also published as: Jonathan All


Lacuna Reconstruction: Self-Supervised Pre-Training for Low-Resource Historical Document Transcription
Nikolai Vogler | Jonathan Allen | Matthew Miller | Taylor Berg-Kirkpatrick
Findings of the Association for Computational Linguistics: NAACL 2022

We present a self-supervised pre-training approach for learning rich visual language representations for both handwritten and printed historical document transcription. After supervised fine-tuning of our pre-trained encoder representations for low-resource document transcription in two settings, (1) a heterogeneous set of handwritten Islamicate manuscript images and (2) early modern English printed documents, we show a meaningful improvement in recognition accuracy over the same supervised model trained from scratch with as few as 30 line image transcriptions for training. Our masked language model-style pre-training strategy, in which the model is trained to identify the true masked visual representation among distractors sampled from within the same line, encourages learning robust contextualized language representations that are invariant to the scribal writing styles and printing noise present across documents.
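The pre-training objective described above is a contrastive masked-prediction task: a context encoder, given a line image with some positions masked, must pick out the true visual feature at each masked position from distractors drawn from the same line. A minimal PyTorch sketch of one plausible form of that loss follows; the function name, tensor shapes, and hyperparameters are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def masked_contrastive_loss(target_feats, context_feats, mask,
                            num_distractors=10, temperature=0.1):
    # target_feats:  (T, D) per-position visual features of one line (targets)
    # context_feats: (T, D) encoder outputs computed from the masked input
    # mask:          (T,) bool, True at masked positions
    T = target_feats.size(0)
    losses = []
    for t in mask.nonzero(as_tuple=True)[0]:
        # Distractors come from elsewhere in the same line, so the model
        # cannot solve the task using style cues shared across the line.
        cand = torch.tensor([i for i in range(T) if i != t])
        distractors = cand[torch.randperm(len(cand))[:num_distractors]]
        candidates = torch.cat([target_feats[t].unsqueeze(0),
                                target_feats[distractors]])  # (K+1, D)
        # Cosine similarity between the contextual prediction and each candidate.
        sims = F.cosine_similarity(context_feats[t].unsqueeze(0),
                                   candidates) / temperature
        # The true feature sits at index 0; InfoNCE-style cross-entropy.
        losses.append(F.cross_entropy(sims.unsqueeze(0),
                                      torch.zeros(1, dtype=torch.long)))
    return torch.stack(losses).mean()

Sampling negatives from within the same line is the key design choice: because every candidate shares the line's scribal hand or printing noise, those factors are uninformative for the task, and the model is pushed to rely on contextual language cues instead.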


Reflections on Twenty Years of the ACL
Jonathan Allen
20th Annual Meeting of the Association for Computational Linguistics


Toward a Computational Theory of Speech Perception
Jonathan All
17th Annual Meeting of the Association for Computational Linguistics