Daniel Holmer


2023

Constructing Pseudo-parallel Swedish Sentence Corpora for Automatic Text Simplification
Daniel Holmer | Evelina Rennes
Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)

Automatic text simplification (ATS) describes the automatic transformation of a text from a complex form to a less complex form. Many modern ATS techniques need large parallel corpora of standard and simplified text, but such data does not exist for many languages. One way to overcome this issue is to create pseudo-parallel corpora by dividing existing corpora into standard and simple parts. In this work, we explore the creation of Swedish pseudo-parallel monolingual corpora by applying different feature representation methods, sentence alignment algorithms, and indexing approaches to a large monolingual corpus. The different corpora are used to fine-tune a sentence simplification system based on BART, which is evaluated with standard evaluation metrics for automatic text simplification.
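As a rough illustration of the pseudo-parallel idea described above, the sketch below pairs standard and simplified sentences by embedding similarity; pairs of this kind are the sort of data one could use to fine-tune a BART-style simplification model. The sentence-transformer model name, similarity threshold, and toy sentences are illustrative assumptions, not the paper's actual settings.

```python
# Minimal sketch (not the paper's pipeline): align "standard" and "simple"
# sentences from a monolingual corpus by cosine similarity of sentence
# embeddings, producing pseudo-parallel pairs for later fine-tuning.
# Model name and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util


def align_pseudo_parallel(standard_sents, simple_sents, threshold=0.75):
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    std_emb = model.encode(standard_sents, convert_to_tensor=True)
    sim_emb = model.encode(simple_sents, convert_to_tensor=True)
    scores = util.cos_sim(std_emb, sim_emb)  # shape: (len(standard), len(simple))
    pairs = []
    for i, row in enumerate(scores):
        j = int(row.argmax())  # best-matching simple sentence for standard sentence i
        if float(row[j]) >= threshold:
            pairs.append((standard_sents[i], simple_sents[j]))
    return pairs


if __name__ == "__main__":
    standard = ["Regeringen beslutade att genomföra omfattande reformer av skattesystemet."]
    simple = ["Regeringen bestämde att ändra skatterna."]
    print(align_pseudo_parallel(standard, simple, threshold=0.5))
```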

Who said what? Speaker Identification from Anonymous Minutes of Meetings
Daniel Holmer | Lars Ahrenberg | Julius Monsen | Arne Jönsson | Mikael Apel | Marianna Grimaldi
Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)

We study the performance of machine learning techniques on the problem of identifying speakers at meetings from anonymous minutes issued afterwards. The data comes from board meetings of Sveriges Riksbank (Sweden’s Central Bank). The data is split in two ways: one where each reported contribution to the discussion is treated as a data point, and another where all contributions from a single speaker have been aggregated. Using interpretable models, we find that lexical features and topic models generated from speeches held by the board members outside of board meetings are good predictors of speaker identity. Combining topic models with other features gives prediction accuracies close to 80% on aggregated data, though there is still a sizeable gap in performance compared to a less easily interpreted BERT-based transformer model that we offer as a benchmark.
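The sketch below illustrates the general idea of combining topic-model features with lexical features in an interpretable classifier for speaker prediction. It is not the paper's pipeline: the feature choices, hyperparameters, and toy data are assumptions for illustration only.

```python
# Minimal sketch: represent each minuted contribution with LDA topic
# proportions plus lexical tf-idf features, and train a linear classifier
# whose coefficients can be inspected per feature. All settings and data
# below are illustrative assumptions, not the paper's setup.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

topic_features = Pipeline([
    ("counts", CountVectorizer()),
    ("lda", LatentDirichletAllocation(n_components=10, random_state=0)),
])
lexical_features = TfidfVectorizer()

clf = Pipeline([
    ("features", FeatureUnion([("topics", topic_features),
                               ("lexical", lexical_features)])),
    ("model", LogisticRegression(max_iter=1000)),
])

# Toy contributions and speaker labels, purely for illustration.
contributions = [
    "inflation remains above target according to recent forecasts",
    "household debt continues to grow and is a cause for concern",
]
speakers = ["member_a", "member_b"]

clf.fit(contributions, speakers)
print(clf.predict(["the inflation outlook has worsened since the last meeting"]))
```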

2022

NyLLex: A Novel Resource of Swedish Words Annotated with Reading Proficiency Level
Daniel Holmer | Evelina Rennes
Proceedings of the Thirteenth Language Resources and Evaluation Conference

What makes a text easy or difficult to read depends on a variety of factors. One of the most prominent, however, is whether the text contains easy words and avoids difficult ones. Deciding whether a word is easy or difficult is not a trivial task, since it depends on characteristics of the word itself as well as of the reader, but it can be facilitated by a corpus annotated with word frequencies and reading proficiency levels. In this paper, we present NyLLex, a novel lexical resource derived from books published by Sweden’s largest publisher of easy language texts. NyLLex consists of 6,668 entries, with frequency counts distributed over six reading proficiency levels. We show that NyLLex, with its novel source material aimed at individuals of different reading proficiency levels, can serve as a complement to already existing resources for Swedish.
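A small sketch of how a resource of this kind might be consulted is given below. The entry layout (a word mapped to frequency counts over six proficiency levels) and the heuristic for estimating a word's difficulty are assumptions for illustration, not NyLLex's actual format.

```python
# Minimal sketch (hypothetical entry layout, not NyLLex's actual file format):
# each word maps to frequency counts over six reading proficiency levels, and
# a word's difficulty is estimated as the lowest level at which it occurs.
from typing import Dict, List, Optional

LEVELS = ["1", "2", "3", "4", "5", "6"]  # six proficiency levels, easiest first


def easiest_level(lexicon: Dict[str, List[int]], word: str) -> Optional[str]:
    """Return the lowest proficiency level at which the word occurs, if any."""
    freqs = lexicon.get(word)
    if freqs is None:
        return None
    for level, freq in zip(LEVELS, freqs):
        if freq > 0:
            return level
    return None


# Hypothetical entries with per-level frequency counts, easiest to hardest.
lexicon = {
    "hund": [42, 30, 25, 18, 10, 5],   # frequent already at the easiest level
    "ambivalent": [0, 0, 0, 1, 3, 7],  # only appears from level 4 upward
}
print(easiest_level(lexicon, "hund"))        # "1"
print(easiest_level(lexicon, "ambivalent"))  # "4"
```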