Svein Arne Brygfjeld


2025

The Impact of Copyrighted Material on Large Language Models: A Norwegian Perspective
Javier de la Rosa | Vladislav Mikhailov | Lemei Zhang | Freddy Wetjen | David Samuel | Peng Liu | Rolv-Arild Braaten | Petter Mæhlum | Magnus Breder Birkenes | Andrey Kutuzov | Tita Enstad | Hans Christian Farsethås | Svein Arne Brygfjeld | Jon Atle Gulla | Stephan Oepen | Erik Velldal | Wilfred Østgulen | Lilja Øvrelid | Aslak Sira Myhre
Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)

The use of copyrighted materials in training language models raises critical legal and ethical questions. This paper presents a framework for, and the results of, empirically assessing the impact of publisher-controlled copyrighted corpora on the performance of generative large language models (LLMs) for Norwegian. When evaluated on a diverse set of tasks, we found that adding both books and newspapers to the data mixture of LLMs tends to improve their performance, while the addition of fiction works seems to be detrimental. Our experiments could inform the creation of a compensation scheme for authors whose works contribute to AI development.

2021

Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model
Per E Kummervold | Javier De la Rosa | Freddy Wetjen | Svein Arne Brygfjeld
Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)

In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokmål and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.