Analyzing Finetuned Vision Models for Mixtec Codex Interpretation
Alexander Webber | Zachary Sayers | Amy Wu | Elizabeth Thorner | Justin Witter | Gabriel Ayoubi | Christan Grant
Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)
Throughout history, pictorial record-keeping has been used to document events, stories, and concepts; a well-known example is the Tzolk'in Maya calendar. The pre-Columbian Mixtec society also recorded many works through graphical media called codices, which depict both stories and real events. Mixtec codices are unique because the depicted scenes are highly structured within and across documents. As a first effort toward translation, we created two binary classification tasks over Mixtec codices, namely gender and pose; the composition of figures within a codex is essential for understanding its narrative. We labeled a dataset of around 1,300 figures drawn from three codices of varying quality. We finetuned the Visual Geometry Group 16 (VGG-16) and Vision Transformer 16 (ViT-16) models, measured their performance, and compared learned features with expert opinions found in the literature. The results show that when finetuned, both VGG and ViT perform well, with the transformer-based architecture (ViT) outperforming the CNN-based architecture (VGG) at higher learning rates. We are releasing this work to allow collaboration with the Mixtec community and domain scientists.