BERT-based Annotation of Oral Texts Elicited via Multilingual Assessment Instrument for Narratives

Timo Baumann, Korbinian Eller, Natalia Gagarina


Abstract
We investigate how NLP can help annotate the structure and complexity of oral narrative texts elicited via the Multilingual Assessment Instrument for Narratives (MAIN). MAIN is a theory-based tool designed to evaluate the narrative abilities of children who are learning one or more languages from birth or early in their development. It provides a standardized way to measure how well children aged 3 to 12 comprehend and produce stories across different languages, along with referential norms for this age range. MAIN has been adapted to more than 90 languages and is used in over 65 countries. MAIN analysis focuses on story structure and story complexity, which are typically scored manually using scoring sheets. We investigate automating this process with BERT-based classification, which already yields promising results.
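The paper does not include an implementation here, but a minimal sketch of the kind of BERT-based utterance classification it describes might look as follows. The multilingual checkpoint and the label set (loosely modelled on MAIN story-structure components such as goal, attempt, and outcome) are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch (not the authors' code): sentence-level classification of
# MAIN story-structure components with a multilingual BERT encoder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative label set, loosely based on MAIN story-structure components.
LABELS = ["setting", "internal_state_initiating", "goal",
          "attempt", "outcome", "internal_state_reaction", "other"]

# Assumed encoder; any multilingual BERT-style checkpoint could be used.
MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS))

def classify_utterance(utterance: str) -> str:
    """Assign a story-structure label to one transcribed child utterance."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# After fine-tuning on annotated MAIN transcripts, a call like this
# would ideally return "goal" for a goal statement.
print(classify_utterance("The boy wanted to get his ball back."))
```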
Anthology ID:
2024.wnu-1.16
Volume:
Proceedings of the 6th Workshop on Narrative Understanding
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yash Kumar Lal, Elizabeth Clark, Mohit Iyyer, Snigdha Chaturvedi, Anneliese Brei, Faeze Brahman, Khyathi Raghavi Chandu
Venue:
WNU
Publisher:
Association for Computational Linguistics
Pages:
99–104
URL:
https://aclanthology.org/2024.wnu-1.16
Cite (ACL):
Timo Baumann, Korbinian Eller, and Natalia Gagarina. 2024. BERT-based Annotation of Oral Texts Elicited via Multilingual Assessment Instrument for Narratives. In Proceedings of the 6th Workshop on Narrative Understanding, pages 99–104, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
BERT-based Annotation of Oral Texts Elicited via Multilingual Assessment Instrument for Narratives (Baumann et al., WNU 2024)
PDF:
https://aclanthology.org/2024.wnu-1.16.pdf