Pre-training and Evaluating Transformer-based Language Models for Icelandic

Jón Friðrik Daðason, Hrafn Loftsson


Abstract
In this paper, we evaluate several Transformer-based language models for Icelandic on four downstream tasks: Part-of-Speech tagging, Named Entity Recognition, Dependency Parsing, and Automatic Text Summarization. We pre-train four types of monolingual ELECTRA and ConvBERT models and compare our results to a previously trained monolingual RoBERTa model and the multilingual mBERT model. We find that the Transformer models obtain better results, often by a large margin, compared to previous state-of-the-art models. Furthermore, our results indicate that pre-training larger language models results in a significant reduction in error rates in comparison to smaller models. Finally, our results show that the monolingual models for Icelandic outperform a comparably sized multilingual model.
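To give a concrete sense of how such pre-trained models are typically applied to a downstream task like Part-of-Speech tagging, the sketch below loads a Transformer checkpoint for token classification with Hugging Face Transformers. This is an illustrative assumption, not the authors' pipeline; the checkpoint path and tagset size are placeholders.

```python
# Minimal sketch (assumed setup, not the paper's actual models or code):
# using a pre-trained Icelandic Transformer for PoS tagging framed as
# token classification. The checkpoint name and label count are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

checkpoint = "path/to/icelandic-electra-base"  # hypothetical checkpoint path

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=16)

# Tag an example Icelandic sentence; each sub-word token receives a label id,
# which would be mapped back to word-level PoS tags during evaluation.
sentence = "Hún las bókina í gær."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = logits.argmax(dim=-1)
print(predicted_ids)
```

In practice the classification head would be fine-tuned on an annotated Icelandic treebank before predictions are meaningful; the same token-classification setup also covers Named Entity Recognition.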
Anthology ID:
2022.lrec-1.804
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
7386–7391
URL:
https://aclanthology.org/2022.lrec-1.804
Cite (ACL):
Jón Friðrik Daðason and Hrafn Loftsson. 2022. Pre-training and Evaluating Transformer-based Language Models for Icelandic. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 7386–7391, Marseille, France. European Language Resources Association.
Cite (Informal):
Pre-training and Evaluating Transformer-based Language Models for Icelandic (Daðason & Loftsson, LREC 2022)
PDF:
https://aclanthology.org/2022.lrec-1.804.pdf