Workshop of The UniDive Shared Task on Multilingual Morpho-Syntactic Parsing (2025)



pdf (full)
bib (full)
Proceedings of The UniDive 2025 Shared Task on Multilingual Morpho-Syntactic Parsing

pdf bib
Proceedings of The UniDive 2025 Shared Task on Multilingual Morpho-Syntactic Parsing
Omer Goldman | Leonie Weissweiler | Reut Tsarfaty

pdf bib
Findings of the UniDive 2025 Shared Task on Multilingual Morpho-Syntactic Parsing
Omer Goldman | Leonie Weissweiler | Kutay Acar | Diego Alves | Anna Baczkowska | Gulsen Eryigit | Lenka Krippnerová | Adriana Pagano | Tanja Samardžić | Luigi Talamo | Alina Wróblewska | Daniel Zeman | Joakim Nivre | Reut Tsarfaty

This paper details the findings of the 2025 UniDive shared task on multilingual morphosyntactic parsing. It introduces a new representation in which morphology and syntax are modelled jointly to form dependency trees of contentful elements, each characterized by features determined by grammatical words and morphemes. This schema bypasses the theoretical debate over the definition of “words” and encourages the development of parsers for typologically diverse languages. The data for the task, spanning 9 languages, was annotated based on existing Universal Dependencies (UD) treebanks adapted to the new format. We accompany the data with a new metric, MSLAS, which combines syntactic LAS with F1 over grammatical features. The task received two submissions, which, together with three baselines, give a detailed view of the ability of multi-task encoder models to cope with the task at hand. The best performing system, UM, achieved 78.7 MSLAS macro-averaged over all languages, improving by 31.4 points over the few-shot prompting baseline.
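The abstract describes MSLAS only as a combination of syntactic LAS with F1 over grammatical features; the precise definition is given in the paper. The sketch below is a minimal illustration, not the official scorer: it assumes each node carries a head index, a dependency relation, and a set of Feature=Value strings, and it assumes (as one plausible reading) that MSLAS counts a node as correct only when head, relation, and the full feature set all match the gold annotation.

# Illustrative sketch only; function names and the exact MSLAS scoring rule
# are assumptions, not the shared task's official evaluation code.

def las(gold, pred):
    """Labeled attachment score: fraction of nodes with correct head and relation."""
    correct = sum(1 for g, p in zip(gold, pred)
                  if g["head"] == p["head"] and g["deprel"] == p["deprel"])
    return correct / len(gold) if gold else 0.0

def feats_f1(gold, pred):
    """Micro F1 over individual Feature=Value pairs."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        g_feats, p_feats = set(g["feats"]), set(p["feats"])
        tp += len(g_feats & p_feats)
        fp += len(p_feats - g_feats)
        fn += len(g_feats - p_feats)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def mslas(gold, pred):
    """ASSUMED reading of MSLAS: a node counts as correct only if its head,
    relation, and complete feature set all match the gold annotation."""
    correct = sum(1 for g, p in zip(gold, pred)
                  if g["head"] == p["head"]
                  and g["deprel"] == p["deprel"]
                  and set(g["feats"]) == set(p["feats"]))
    return correct / len(gold) if gold else 0.0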

pdf bib
A Joint Multitask Model for Morpho-Syntactic Parsing
Demian Inostroza Améstica | Meladel Mistica | Ekaterina Vylomova | Chris Guest | Kemal Kurniawan

We present a joint multitask model for the UniDive 2025 Morpho-Syntactic Parsing shared task, where systems predict both morphological and syntactic analyses following a novel UD annotation scheme. Our system uses a shared XLM-RoBERTa encoder with three specialized decoders for content word identification, dependency parsing, and morphosyntactic feature prediction. Our model achieves the best overall performance on the shared task’s leaderboard covering nine typologically diverse languages, with an average MSLAS score of 78.7%, LAS of 80.1%, and Feats F1 of 90.3%. Our ablation studies show that matching the task’s gold tokenization and content word identification are crucial to model performance. Error analysis reveals that our model struggles with core grammatical cases (particularly Nom–Acc) and nominal features across languages.
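The abstract names the overall shape of the system (one shared XLM-RoBERTa encoder feeding three task heads) but not its internals. The following PyTorch sketch shows only that shape; the head sizes, the bilinear arc scorer, and the label inventories are placeholder assumptions, not the authors' implementation.

# Hedged sketch of a shared-encoder, three-head architecture; NOT the authors' code.
import torch
import torch.nn as nn
from transformers import AutoModel

class JointMSPModel(nn.Module):
    def __init__(self, n_deprels: int, n_feat_values: int,
                 encoder_name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # 1) Content-word identification: binary decision per token.
        self.content_head = nn.Linear(hidden, 2)
        # 2) Dependency parsing: a simple bilinear arc scorer plus a relation
        #    classifier (stands in for whatever parser head the paper uses).
        self.arc_scorer = nn.Bilinear(hidden, hidden, 1)
        self.rel_head = nn.Linear(2 * hidden, n_deprels)
        # 3) Morphosyntactic features: multi-label prediction over feature values.
        self.feats_head = nn.Linear(hidden, n_feat_values)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        content_logits = self.content_head(h)                     # (B, T, 2)
        # Score every head-dependent token pair: a (B, T, T) matrix of arc scores.
        B, T, H = h.shape
        heads = h.unsqueeze(2).expand(B, T, T, H).reshape(-1, H)
        deps = h.unsqueeze(1).expand(B, T, T, H).reshape(-1, H)
        arc_scores = self.arc_scorer(heads, deps).view(B, T, T)
        rel_logits = self.rel_head(torch.cat([heads, deps], dim=-1)).view(B, T, T, -1)
        feat_logits = self.feats_head(h)                          # (B, T, n_feat_values)
        return content_logits, arc_scores, rel_logits, feat_logits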

pdf bib
Typology-aware Multilingual Morphosyntactic Parsing with Functional Node Filtering
Kutay Acar | Gulsen Eryigit

This paper presents a system for the UniDive Morphosyntactic Parsing (MSP) Shared Task, where it ranked second overall among participating teams. The task introduces a morphosyntactic representation that jointly models syntactic dependencies and morphological features by treating content-bearing elements as graph nodes and encoding functional elements as feature annotations, posing challenges for conventional parsers and necessitating more flexible, linguistically informed approaches. The proposed system combines a typology-aware, multitask parser with a multilingual content/function classifier to handle structural variation across languages. The architecture uses adapter modules and language embeddings to encode typological information. Evaluations across 9 typologically varied languages confirm that the system can accurately replicate both universal and language-specific morphosyntactic patterns.
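The abstract names two ingredients, adapter modules and language embeddings, without specifying how they are wired together. The sketch below shows one conventional way to combine them (a bottleneck adapter plus a learned per-language embedding added to every token representation); the dimensions and the injection point are assumptions, not the authors' configuration.

# Hedged sketch of adapter modules with language embeddings; NOT the authors' code.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Standard bottleneck adapter: down-project, non-linearity, up-project, residual."""
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

class TypologyAwareLayer(nn.Module):
    """Adds a learned language embedding to every token, then applies an adapter."""
    def __init__(self, hidden: int, n_languages: int):
        super().__init__()
        self.lang_emb = nn.Embedding(n_languages, hidden)
        self.adapter = Adapter(hidden)

    def forward(self, token_states, lang_id):
        # token_states: (B, T, hidden); lang_id: (B,)
        lang = self.lang_emb(lang_id).unsqueeze(1)   # (B, 1, hidden), broadcast over tokens
        return self.adapter(token_states + lang)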