Less is Enough: Less-Resourced Multilingual AMR Parsing

Bram Vanroy, Tim Van de Cruys


Abstract
This paper investigates the efficacy of multilingual models for the task of text-to-AMR parsing, focusing on English, Spanish, and Dutch. We train and evaluate models under various configurations, including monolingual and multilingual settings, both in full and reduced data scenarios. Our empirical results reveal that while monolingual models exhibit superior performance, multilingual models are competitive across all languages, offering a more resource-efficient alternative for training and deployment. Crucially, our findings demonstrate that AMR parsing benefits from transfer learning across languages even with access to significantly smaller datasets. As a tangible contribution, we provide text-to-AMR parsing models for the aforementioned languages as well as multilingual variants, and we make available the large corpora of translated data for Dutch, Spanish (and Irish) that we used to train them, in order to foster AMR research in non-English languages. Additionally, we open-source the training code and offer an interactive interface for parsing AMR graphs from text.
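For readers new to the formalism: Abstract Meaning Representation (AMR) encodes the meaning of a sentence as a rooted, directed, labeled graph, conventionally serialized in PENMAN notation. The sketch below is illustrative and not taken from the paper; it decodes the canonical AMR example for "The boy wants to go" (Banarescu et al., 2013) using the open-source penman Python library, a tooling choice of ours rather than the authors'.

    import penman  # third-party library: pip install penman

    # Canonical AMR example: "The boy wants to go."
    # Variables (w, b, g) name graph nodes; :ARGn edges follow
    # PropBank-style semantic roles; the re-used variable b marks
    # that the wanter and the goer are the same entity.
    amr_string = """
    (w / want-01
       :ARG0 (b / boy)
       :ARG1 (g / go-01
                 :ARG0 b))
    """

    graph = penman.decode(amr_string)
    print(graph.triples)
    # [('w', ':instance', 'want-01'), ('w', ':ARG0', 'b'),
    #  ('b', ':instance', 'boy'), ...]

A text-to-AMR parser, as trained in this paper, maps a raw sentence to such a graph; parser quality is typically scored with Smatch, which compares the predicted and gold triple sets.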
Anthology ID:
2024.isa-1.11
Volume:
Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Harry Bunt, Nancy Ide, Kiyong Lee, Volha Petukhova, James Pustejovsky, Laurent Romary
Venues:
ISA | WS
Publisher:
ELRA and ICCL
Pages:
82–92
URL:
https://aclanthology.org/2024.isa-1.11
Cite (ACL):
Bram Vanroy and Tim Van de Cruys. 2024. Less is Enough: Less-Resourced Multilingual AMR Parsing. In Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024, pages 82–92, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Less is Enough: Less-Resourced Multilingual AMR Parsing (Vanroy & Van de Cruys, ISA-WS 2024)
PDF:
https://aclanthology.org/2024.isa-1.11.pdf