2024
Correcting Challenging Finnish Learner Texts With Claude, GPT-3.5 and GPT-4 Large Language Models
Mathias Creutz
Proceedings of the Ninth Workshop on Noisy and User-generated Text (W-NUT 2024)
This paper studies the correction of challenging authentic Finnish learner texts at beginner level (CEFR A1). Three state-of-the-art large language models are compared, and it is shown that GPT-4 outperforms GPT-3.5, which in turn outperforms Claude v1 on this task. Additionally, ensemble models based on classifiers combining outputs of multiple single models are evaluated. The highest accuracy for an ensemble model is 84.3%, whereas the best single model, which is a GPT-4 model, produces sentences that are fully correct 83.3% of the time. In general, the different models perform on a continuum, where grammatical correctness, fluency and coherence go hand in hand.
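As an illustration of the ensemble idea, the sketch below combines corrections proposed by several LLMs for one sentence. The paper trains classifiers over the model outputs; the exact-match vote and fallback here are only an illustrative stand-in for such a classifier, and the model names and example sentences are hypothetical.

```python
# Minimal sketch (not the authors' classifier): combine corrections from
# several LLMs by exact-match voting, falling back to the strongest model.
from collections import Counter

def ensemble_correction(candidates):
    """candidates: {model_name: corrected_sentence}"""
    votes = Counter(candidates.values())
    best, count = votes.most_common(1)[0]
    if count >= 2:                  # at least two models agree verbatim
        return best
    return candidates["gpt-4"]      # otherwise trust the best single model

print(ensemble_correction({
    "claude-v1": "Minä asun Helsingissä.",
    "gpt-3.5": "Minä asun Helsingissä.",
    "gpt-4": "Asun Helsingissä.",
}))  # -> "Minä asun Helsingissä."
```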
Toward the Modular Training of Controlled Paraphrase Adapters
Teemu Vahtola | Mathias Creutz
Proceedings of the 1st Workshop on Modular and Open Multilingual NLP (MOOMIN 2024)
Controlled paraphrase generation often focuses on a specific aspect of paraphrasing, for instance syntactically controlled paraphrase generation. However, these models lack modularity: adapting them to another aspect, such as lexical variation, requires full retraining each time. To make the training of controlled paraphrase models more flexible, we propose incrementally training a modularized system for controlled paraphrase generation in English. We start by fine-tuning a pretrained language model to learn the broad task of paraphrase generation, emphasizing meaning preservation and surface form variation in general. Subsequently, we train a specialized sub-task adapter on limited sub-task-specific training data. This adapter can then guide the paraphrase generation process toward output that aligns with the distinctive features of the sub-task training data. Preliminary results comparing the fine-tuned and adapted model against various competing systems indicate that the most successful way to master both general paraphrasing skills and task-specific expertise is a two-stage approach: first fine-tune a generic paraphrase model, then tailor it to the specific sub-task.
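A minimal sketch of this two-stage recipe, assuming a Hugging Face seq2seq model and LoRA adapters from the PEFT library; the concrete adapter type, base model and hyperparameters are assumptions, not necessarily what the authors used.

```python
# Sketch of the two-stage idea: (1) fine-tune a pretrained seq2seq model on
# generic paraphrase pairs, (2) train a small adapter on sub-task data only.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")  # assumed base model
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
# ... stage 1: fine-tune `base` on broad paraphrase pairs ...

adapter_cfg = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16)
model = get_peft_model(base, adapter_cfg)   # only the adapter weights are trainable
model.print_trainable_parameters()
# ... stage 2: train the adapter on the small sub-task dataset (e.g. lexical
# variation), then call model.generate(...) for controlled paraphrasing ...
```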
LLMs’ morphological analyses of complex FST-generated Finnish words
Anssi Moisio | Mathias Creutz | Mikko Kurimo
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Rule-based language processing systems have been overshadowed by neural systems in terms of utility, but it remains unclear whether neural NLP systems, in practice, learn the grammar rules that humans use. This work aims to shed light on the issue by evaluating state-of-the-art LLMs on a task of morphological analysis of complex Finnish noun forms. We generate the forms using an FST tool; they are unlikely to have occurred in the LLMs' training sets and therefore require morphological generalisation capacity. We find that GPT-4-turbo has some difficulties with the task, while GPT-3.5-turbo struggles and the smaller models Llama2-70B and Poro-34B fail almost completely.
2023
Guiding Zero-Shot Paraphrase Generation with Fine-Grained Control Tokens
Teemu Vahtola | Mathias Creutz | Jörg Tiedemann
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
Sequence-to-sequence paraphrase generation models often struggle with the generation of diverse paraphrases. This deficiency constrains the viability of leveraging paraphrase generation in different Natural Language Processing tasks. We propose a translation-based guided paraphrase generation model that learns useful features for promoting surface form variation in generated paraphrases from cross-lingual parallel data. Our proposed method leverages multilingual neural machine translation pretraining to learn zero-shot paraphrasing. Furthermore, we incorporate dedicated prefix tokens into the training of the machine translation models to promote variation. The prefix tokens are designed to affect various linguistic features related to surface form realizations, and can be applied during inference to guide the decoding process towards a desired solution. We assess the proposed guided model on paraphrase generation in three languages, English, Finnish, and Swedish, and analyse how well the prefix tokens guide the paraphrasing. Our analysis suggests that the attributes represented by the prefix tokens are useful for promoting variation: they push the paraphrases generated by the guided model to diverge from the input sentence while still preserving the semantics of the sentence well.
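The sketch below illustrates how such prefix tokens can be attached to training pairs and supplied at inference time; the token inventory, bucketing thresholds and helper function are hypothetical, not the paper's actual scheme.

```python
# Minimal sketch of prefix-token guidance for paraphrase generation.
def lexical_overlap(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def add_control_prefix(src, tgt=None):
    """Prefix a training pair (or, if tgt is None, a test input) with labels."""
    if tgt is None:                       # inference: request high variation
        return "<len_same> <lex_high> " + src
    lex = "<lex_low>" if lexical_overlap(src, tgt) > 0.6 else "<lex_high>"
    length = ("<len_same>"
              if abs(len(src.split()) - len(tgt.split())) <= 2 else "<len_diff>")
    return f"{length} {lex} {src}"

print(add_control_prefix("the weather is awful today",
                         "today the weather is really bad"))
# -> "<len_same> <lex_high> the weather is awful today"
```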
On using distribution-based compositionality assessment to evaluate compositional generalisation in machine translation
Anssi Moisio | Mathias Creutz | Mikko Kurimo
Proceedings of the 1st GenBench Workshop on (Benchmarking) Generalisation in NLP
Compositional generalisation (CG), in NLP and in machine learning more generally, has been assessed mostly using artificial datasets. It is important to develop benchmarks that assess CG also in real-world natural language tasks in order to understand the abilities and limitations of systems deployed in the wild. To this end, our GenBench Collaborative Benchmarking Task submission utilises the distribution-based compositionality assessment (DBCA) framework to split the Europarl translation corpus into a training and a test set in such a way that the test set requires compositional generalisation capacity. Specifically, the training and test sets have divergent distributions of dependency relations, testing NMT systems’ capability of translating dependencies that they have not been trained on. This is a fully automated procedure for creating natural language compositionality benchmarks, making it simple and inexpensive to apply to other datasets and languages. The code and data for the experiments are available at https://github.com/aalto-speech/dbca.
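For readers unfamiliar with DBCA-style splits, the following sketch shows the kind of divergence such a split maximises between the training and test distributions. The Chernoff-style coefficient, the alpha value and the toy dependency-relation counts are illustrative assumptions, not the exact quantities computed in the paper; see the linked repository for the real implementation.

```python
# Sketch: divergence between train/test frequency distributions of compounds
# (here: dependency relations), 0 = identical distributions, 1 = disjoint.
from collections import Counter

def chernoff_divergence(train_counts, test_counts, alpha=0.1):
    keys = set(train_counts) | set(test_counts)
    p_total = sum(train_counts.values()) or 1
    q_total = sum(test_counts.values()) or 1
    coeff = sum(
        (train_counts.get(k, 0) / p_total) ** alpha
        * (test_counts.get(k, 0) / q_total) ** (1 - alpha)
        for k in keys
    )
    return 1.0 - coeff

train = Counter({"nsubj->obj": 120, "amod->nsubj": 45})   # toy counts
test = Counter({"nsubj->obj": 10, "obl->advmod": 30})
print(round(chernoff_divergence(train, test), 3))
```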
Evaluating Morphological Generalisation in Machine Translation by Distribution-Based Compositionality Assessment
Anssi Moisio | Mathias Creutz | Mikko Kurimo
Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)
Compositional generalisation refers to the ability to understand and generate a potentially infinite number of novel meanings using a finite group of known primitives and a set of rules to combine them. The degree to which artificial neural networks can learn this ability is an open question. Recently, some evaluation methods and benchmarks have been proposed to test compositional generalisation, but not many have focused on the morphological level of language. We propose an application of the previously developed distribution-based compositionality assessment method to assess morphological generalisation in NLP tasks, such as machine translation or paraphrase detection. We demonstrate the use of our method by comparing translation systems with different BPE vocabulary sizes. The evaluation method we propose suggests that small vocabularies help with morphological generalisation in NMT.
2022
It Is Not Easy To Detect Paraphrases: Analysing Semantic Similarity With Antonyms and Negation Using the New SemAntoNeg Benchmark
Teemu Vahtola | Mathias Creutz | Jörg Tiedemann
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
We investigate to what extent a hundred publicly available, popular neural language models capture meaning systematically. Sentence embeddings obtained from pretrained or fine-tuned language models can be used to perform particular tasks, such as paraphrase detection, semantic textual similarity assessment or natural language inference. Common to all of these tasks is that paraphrastic sentences, that is, sentences that carry (nearly) the same meaning, should have (nearly) the same embeddings regardless of surface form. We demonstrate that performance varies greatly across different language models when a specific type of meaning-preserving transformation is applied: two sentences should be identified as paraphrastic if one of them contains a negated antonym in relation to the other one, such as “I am not guilty” versus “I am innocent”. We introduce and release SemAntoNeg, a new test suite containing 3152 entries for probing paraphrasticity in sentences incorporating negation and antonyms. Among other things, we show that language models fine-tuned for natural language inference outperform other types of models on the test suite, especially those fine-tuned to produce general-purpose sentence embeddings. Furthermore, we show that most models designed explicitly for paraphrasing are rather mediocre in our task.
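A minimal sketch of the probe that SemAntoNeg enables, assuming a sentence-transformers model; the specific model name and example sentences are arbitrary choices, not ones evaluated in the paper.

```python
# Does the embedding model place "I am not guilty" closer to its paraphrase
# "I am innocent" than to the non-paraphrase "I am guilty"?
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed example model
anchor, paraphrase, distractor = "I am not guilty.", "I am innocent.", "I am guilty."
emb = model.encode([anchor, paraphrase, distractor], convert_to_tensor=True)

sim_para = util.cos_sim(emb[0], emb[1]).item()
sim_distractor = util.cos_sim(emb[0], emb[2]).item()
print("correct" if sim_para > sim_distractor else "fooled by negation/antonym")
```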
GEMv2: Multilingual NLG Benchmarking in a Single Line of Code
Sebastian Gehrmann | Abhik Bhattacharjee | Abinaya Mahendiran | Alex Wang | Alexandros Papangelis | Aman Madaan | Angelina McMillan-Major | Anna Shvets | Ashish Upadhyay | Bernd Bohnet | Bingsheng Yao | Bryan Wilie | Chandra Bhagavatula | Chaobin You | Craig Thomson | Cristina Garbacea | Dakuo Wang | Daniel Deutsch | Deyi Xiong | Di Jin | Dimitra Gkatzia | Dragomir Radev | Elizabeth Clark | Esin Durmus | Faisal Ladhak | Filip Ginter | Genta Indra Winata | Hendrik Strobelt | Hiroaki Hayashi | Jekaterina Novikova | Jenna Kanerva | Jenny Chim | Jiawei Zhou | Jordan Clive | Joshua Maynez | João Sedoc | Juraj Juraska | Kaustubh Dhole | Khyathi Raghavi Chandu | Laura Perez-Beltrachini | Leonardo F. R. Ribeiro | Lewis Tunstall | Li Zhang | Mahim Pushkarna | Mathias Creutz | Michael White | Mihir Sanjay Kale | Moussa Kamal Eddine | Nico Daheim | Nishant Subramani | Ondrej Dusek | Paul Pu Liang | Pawan Sasanka Ammanamanchi | Qi Zhu | Ratish Puduppully | Reno Kriz | Rifat Shahriyar | Ronald Cardenas | Saad Mahamood | Salomey Osei | Samuel Cahyawijaya | Sanja Štajner | Sebastien Montella | Shailza Jolly | Simon Mille | Tahmid Hasan | Tianhao Shen | Tosin Adewumi | Vikas Raunak | Vipul Raheja | Vitaly Nikolaev | Vivian Tsai | Yacine Jernite | Ying Xu | Yisi Sang | Yixin Liu | Yufang Hou
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Evaluations in machine learning rarely use the latest metrics, datasets, or human evaluation in favor of remaining compatible with prior work. The compatibility, often facilitated through leaderboards, thus leads to outdated but standardized evaluation practices. We pose that the standardization is taking place in the wrong spot. Evaluation infrastructure should enable researchers to use the latest methods and what should be standardized instead is how to incorporate these new evaluation advances. We introduce GEMv2, the new version of the Generation, Evaluation, and Metrics Benchmark which uses a modular infrastructure for dataset, model, and metric developers to benefit from each other’s work. GEMv2 supports 40 documented datasets in 51 languages, ongoing online evaluation for all datasets, and our interactive tools make it easier to add new datasets to the living benchmark.
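The "single line of code" refers to GEM's data loaders; a typical way to pull a GEM dataset is through the Hugging Face datasets library, as sketched below. The dataset identifier used here is an assumption; consult the GEM documentation for the exact loader IDs.

```python
# Hedged sketch: loading one GEM dataset in a single line via Hugging Face
# `datasets`. The "GEM/xsum" identifier is assumed; check GEM's docs.
from datasets import load_dataset

xsum_val = load_dataset("GEM/xsum", split="validation")  # the "single line"
print(len(xsum_val), xsum_val.column_names)
```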
Morfessor-enriched features and multilingual training for canonical morphological segmentation
Aku Rouhe | Stig-Arne Grönroos | Sami Virpioja | Mathias Creutz | Mikko Kurimo
Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
In our submission to the SIGMORPHON 2022 Shared Task on Morpheme Segmentation, we study whether an unsupervised morphological segmentation method, Morfessor, can help in a supervised setting. Previous research has shown the effectiveness of the approach in semi-supervised settings with small amounts of labeled data. The current tasks vary in data size: the amount of word-level annotated training data is much larger, but the amount of sentence-level annotated training data remains small. Our approach is to pre-segment the input data for a neural sequence-to-sequence model with the unsupervised method. As the unsupervised method can be trained with raw text data, we use Wikipedia to increase the amount of training data. In addition, we train multilingual models for the sentence-level task. The results for the Morfessor-enriched features are mixed, showing benefit for all three sentence-level tasks but only some of the word-level tasks. The multilingual training yields considerable improvements over the monolingual sentence-level models, but it negates the effect of the enriched features.
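A minimal sketch of the pre-segmentation step, using the morfessor Python package to train an unsupervised Baseline model on raw text and to segment words before they are passed to the neural model; the file name and the example word are placeholders.

```python
# Sketch of Morfessor-based pre-segmentation (file names are placeholders).
import morfessor

io = morfessor.MorfessorIO()
train_data = list(io.read_corpus_file("wikipedia_plain.txt"))  # raw text corpus

model = morfessor.BaselineModel()
model.load_data(train_data)
model.train_batch()                      # unsupervised training

segments, _cost = model.viterbi_segment("epäjärjestelmällisyys")
print(" ".join(segments))                # segmentation depends on the corpus
```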
Modeling Noise in Paraphrase Detection
Teemu Vahtola | Eetu Sjöblom | Jörg Tiedemann | Mathias Creutz
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Noisy labels in training data present a challenging issue in classification tasks, misleading a model towards incorrect decisions during training. In this paper, we propose the use of a linear noise model to augment pre-trained language models to account for label noise in fine-tuning. We test our approach in a paraphrase detection task with various levels of noise and five different languages. Our experiments demonstrate the effectiveness of the additional noise model in making the training procedures more robust and stable. Furthermore, we show that this model can be applied without further knowledge about annotation confidence and reliability of individual training examples and we analyse our results in light of data selection and sampling strategies.
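A minimal PyTorch sketch of a linear noise model layered on top of a classifier: the clean class probabilities are passed through a learnable row-stochastic noise matrix, and the loss is computed against the observed (noisy) labels. The initialisation and class count are assumptions, not the paper's exact configuration.

```python
# Sketch: learnable noise channel between clean predictions and noisy labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyChannel(nn.Module):
    """Row-stochastic matrix mapping clean label probabilities to the
    distribution over observed (possibly noisy) labels."""
    def __init__(self, num_classes=2, off_diagonal=0.1):
        super().__init__()
        init = torch.full((num_classes, num_classes), off_diagonal)
        init.fill_diagonal_(1.0 - off_diagonal * (num_classes - 1))
        self.logits = nn.Parameter(init.log())

    def forward(self, clean_log_probs):
        channel = F.softmax(self.logits, dim=-1)        # rows sum to 1
        noisy_probs = clean_log_probs.exp() @ channel   # marginalise over clean labels
        return noisy_probs.clamp_min(1e-8).log()

noise = NoisyChannel(num_classes=2)
clean_log_probs = F.log_softmax(torch.randn(4, 2), dim=-1)  # stand-in for encoder output
noisy_labels = torch.tensor([0, 1, 1, 0])
loss = F.nll_loss(noise(clean_log_probs), noisy_labels)     # train model and channel jointly
```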
Helsinki-NLP at SemEval-2022 Task 2: A Feature-Based Approach to Multilingual Idiomaticity Detection
Sami Itkonen | Jörg Tiedemann | Mathias Creutz
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This paper describes the University of Helsinki submission to the SemEval 2022 task on multilingual idiomaticity detection. Our system utilizes several models made available by HuggingFace, along with the baseline BERT model for the task. We focus on feature engineering based on properties that typically characterize idiomatic expressions. The additional features lead to improvements over the baseline and the final submission achieves 15th place out of 20 submissions. The paper provides error analysis of our model including visualisations of the contributions of individual features.
A Closer Look at Parameter Contributions When Training Neural Language and Translation Models
Raúl Vázquez | Hande Celikkanat | Vinit Ravishankar | Mathias Creutz | Jörg Tiedemann
Proceedings of the 29th International Conference on Computational Linguistics
We analyze the learning dynamics of neural language and translation models using Loss Change Allocation (LCA), an indicator that enables a fine-grained analysis of parameter updates when optimizing for the loss function. In other words, we can observe the contributions of different network components at training time. In this article, we systematically study masked language modeling, causal language modeling, and machine translation. We show that the choice of training objective leads to distinctive optimization procedures, even when performed on comparable Transformer architectures. We demonstrate how the various Transformer parameters are used during training, supporting that the feed-forward components of each layer are the main contributors to the optimization procedure. Finally, we find that the learning dynamics are not affected by data size and distribution but rather determined by the learning objective.
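For reference, a first-order sketch of how Loss Change Allocation attributes a loss change to individual parameter tensors over one optimisation step; the original LCA formulation integrates along the update path more accurately, so this is a simplification for illustration only.

```python
# First-order LCA sketch: contribution of each parameter tensor over one step
# is approximated by grad(theta_t) * (theta_{t+1} - theta_t).
import torch

def lca_step(model, loss_fn, batch, optimizer):
    params = [p for p in model.parameters() if p.requires_grad]
    before = [p.detach().clone() for p in params]

    loss = loss_fn(model, batch)
    optimizer.zero_grad()
    loss.backward()
    grads = [p.grad.detach().clone() for p in params]
    optimizer.step()

    # Negative values mean the tensor's update helped decrease the loss.
    return [
        (g * (p.detach() - b)).sum().item()
        for g, p, b in zip(grads, params, before)
    ]
```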
2021
On the differences between BERT and MT encoder spaces and how to address them in translation tasks
Raúl Vázquez | Hande Celikkanat | Mathias Creutz | Jörg Tiedemann
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop
Various studies show that pretrained language models such as BERT cannot straightforwardly replace encoders in neural machine translation despite their enormous success in other tasks. This is even more astonishing considering the similarities between the architectures. This paper sheds some light on the embedding spaces they create, using average cosine similarity, contextuality metrics and measures for representational similarity for comparison, revealing that BERT and NMT encoder representations look significantly different from one another. In order to address this issue, we propose a supervised transformation from one into the other using explicit alignment and fine-tuning. Our results demonstrate the need for such a transformation to improve the applicability of BERT in MT.
Grammatical Error Generation Based on Translated Fragments
Eetu Sjöblom | Mathias Creutz | Teemu Vahtola
Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)
We perform neural machine translation of sentence fragments in order to create large amounts of training data for English grammatical error correction. Our method aims at simulating mistakes made by second language learners, and produces a wider range of non-native style language in comparison to a state-of-the-art baseline model. We carry out quantitative and qualitative evaluation. Our method is shown to outperform the baseline on data with a high proportion of errors.
Coping with Noisy Training Data Labels in Paraphrase Detection
Teemu Vahtola | Mathias Creutz | Eetu Sjöblom | Sami Itkonen
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
We present new state-of-the-art benchmarks for paraphrase detection on all six languages in the Opusparcus sentential paraphrase corpus: English, Finnish, French, German, Russian, and Swedish. We reach these baselines by fine-tuning BERT. The best results are achieved on smaller and cleaner subsets of the training sets than was observed in previous research. Additionally, we study a translation-based approach that is competitive for the languages with more limited and noisier training data.
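A minimal sketch of the fine-tuning setup, assuming the Hugging Face transformers Trainer with a multilingual BERT checkpoint; the model name, hyperparameters and toy training pairs are assumptions, not the per-language configurations behind the reported benchmarks.

```python
# Sketch only: fine-tune a BERT-style encoder for binary paraphrase detection.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

name = "bert-base-multilingual-cased"          # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

train = Dataset.from_dict({
    "sent1": ["How are you doing?", "Shut the door."],
    "sent2": ["How is it going?", "I love cats."],
    "label": [1, 0],
})
train = train.map(lambda ex: tokenizer(ex["sent1"], ex["sent2"],
                                       truncation=True, max_length=64,
                                       padding="max_length"))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="paraphrase-bert", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train,
).train()
```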
An Empirical Investigation of Word Alignment Supervision for Zero-Shot Multilingual Neural Machine Translation
Alessandro Raganato | Raúl Vázquez | Mathias Creutz | Jörg Tiedemann
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Zero-shot translation is a fascinating feature of Multilingual Neural Machine Translation (MNMT) systems. These MNMT models are usually trained on English-centric data, i.e. with English as either the source or target language, and with a language label prepended to the input indicating the target language. However, recent work has highlighted several flaws of these models in zero-shot scenarios: language labels are ignored and the wrong language is generated, or different runs show highly unstable results. In this paper, we investigate the benefits of an explicit alignment to language labels in Transformer-based MNMT models in the zero-shot context, by jointly training one cross attention head with word alignment supervision to stress the focus on the target language label. We compare and evaluate several MNMT systems on three multilingual MT benchmarks of different sizes, showing that simply supervising one cross attention head to focus both on word alignments and language labels reduces the bias towards translating into the wrong language, improving the zero-shot performance overall. Moreover, as an additional advantage, we find that our alignment supervision leads to more stable results across different training runs.
2020
Paraphrase Generation and Evaluation on Colloquial-Style Sentences
Eetu Sjöblom | Mathias Creutz | Yves Scherrer
Proceedings of the Twelfth Language Resources and Evaluation Conference
In this paper, we investigate paraphrase generation in the colloquial domain. We use state-of-the-art neural machine translation models trained on the Opusparcus corpus to generate paraphrases in six languages: German, English, Finnish, French, Russian, and Swedish. We perform experiments to understand how data selection and filtering for diverse paraphrase pairs affects the generated paraphrases. We compare two different model architectures, an RNN and a Transformer model, and find that the Transformer does not generally outperform the RNN. We also conduct human evaluation on five of the six languages and compare the results to the automatic evaluation metrics BLEU and the recently proposed BERTScore. The results advance our understanding of the trade-offs between the quality and novelty of generated paraphrases, affected by the data selection method. In addition, our comparison of the evaluation methods shows that while BLEU correlates well with human judgments at the corpus level, BERTScore outperforms BLEU in both corpus and sentence-level evaluation.
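A small sketch of the two automatic metrics compared against the human judgments, using the sacrebleu and bert-score packages; the candidate and reference strings are toy examples.

```python
# Score generated paraphrases with corpus-level BLEU and with BERTScore.
import sacrebleu
from bert_score import score

candidates = ["how are you doing ?"]
references = ["how is it going ?"]

bleu = sacrebleu.corpus_bleu(candidates, [references])
P, R, F1 = score(candidates, references, lang="en")

print(f"BLEU = {bleu.score:.1f}, BERTScore F1 = {F1.mean().item():.3f}")
```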
A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation
Raúl Vázquez | Alessandro Raganato | Mathias Creutz | Jörg Tiedemann
Computational Linguistics, Volume 46, Issue 2 - June 2020
Neural machine translation has considerably improved the quality of automatic translations by learning good representations of input sentences. In this article, we explore a multilingual translation model capable of producing fixed-size sentence representations by incorporating an intermediate crosslingual shared layer, which we refer to as attention bridge. This layer exploits the semantics from each language and develops into a language-agnostic meaning representation that can be efficiently used for transfer learning. We systematically study the impact of the size of the attention bridge and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that there is no conflict between translation performance and the use of sentence representations in downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. Nevertheless, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. Similarly, we show that trainable downstream tasks benefit from multilingual models, whereas additional language signals do not improve performance in non-trainable benchmarks. This is an important insight that helps to properly design models for specific applications. Finally, we also include an in-depth analysis of the proposed attention bridge and its ability to encode linguistic properties. We carefully analyze the information that is captured by individual attention heads and identify interesting patterns that explain the performance of specific settings in linguistic probing tasks.
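The attention bridge can be sketched as a structured inner-attention layer that pools a variable-length encoder output into a fixed number of "bridge" vectors; the dimensions and head count below are arbitrary choices for illustration, not the exact configurations studied in the article.

```python
# Sketch of an attention-bridge / inner-attention layer:
# M = A @ H with A = softmax(W2 @ tanh(W1 @ H^T)) over source positions.
import torch
import torch.nn as nn

class AttentionBridge(nn.Module):
    def __init__(self, hidden=512, bridge_heads=10, d_a=256):
        super().__init__()
        self.w1 = nn.Linear(hidden, d_a, bias=False)
        self.w2 = nn.Linear(d_a, bridge_heads, bias=False)

    def forward(self, enc_out):
        # enc_out: (batch, src_len, hidden) -> (batch, bridge_heads, hidden)
        attn = torch.softmax(self.w2(torch.tanh(self.w1(enc_out))), dim=1)
        return attn.transpose(1, 2) @ enc_out

bridge = AttentionBridge()
sentence_repr = bridge(torch.randn(2, 17, 512))   # fixed size: (2, 10, 512)
print(sentence_repr.shape)
```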
2019
An Evaluation of Language-Agnostic Inner-Attention-Based Representations in Machine Translation
Alessandro Raganato | Raúl Vázquez | Mathias Creutz | Jörg Tiedemann
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
In this paper, we explore a multilingual translation model with a cross-lingually shared layer that can be used as fixed-size sentence representation in different downstream tasks. We systematically study the impact of the size of the shared layer and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that the performance in translation does correlate with trainable downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. On the other hand, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. We hypothesize that the training procedure on the downstream task enables the model to identify the encoded information that is useful for the specific task whereas non-trainable benchmarks can be confused by other types of information also encoded in the representation of a sentence.
Multilingual NMT with a Language-Independent Attention Bridge
Raúl Vázquez | Alessandro Raganato | Jörg Tiedemann | Mathias Creutz
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
In this paper, we propose an architecture for machine translation (MT) capable of obtaining multilingual sentence representations by incorporating an intermediate attention bridge that is shared across all languages. We train the model with language-specific encoders and decoders that are connected through an inner-attention layer on the encoder side. The attention bridge exploits the semantics from each language for translation and develops into a language-agnostic meaning representation that can efficiently be used for transfer learning. We present a new framework for the efficient development of multilingual neural machine translation (NMT) using this model and scheduled training. We have tested the approach in a systematic way with a multi-parallel data set. The model achieves substantial improvements over strong bilingual models and performs well for zero-shot translation, which demonstrates its ability of abstraction and transfer learning.
Toward automatic improvement of language produced by non-native language learners
Mathias Creutz | Eetu Sjöblom
Proceedings of the 8th Workshop on NLP for Computer Assisted Language Learning
2018
Open Subtitles Paraphrase Corpus for Six Languages
Mathias Creutz
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Paraphrase Detection on Noisy Subtitles in Six Languages
Eetu Sjöblom | Mathias Creutz | Mikko Aulamo
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text
We perform automatic paraphrase detection on subtitle data from the Opusparcus corpus comprising six European languages: German, English, Finnish, French, Russian, and Swedish. We train two types of supervised sentence embedding models: a word-averaging (WA) model and a gated recurrent averaging network (GRAN) model. We find that GRAN outperforms WA and is more robust to noisy training data. Better results are obtained with more and noisier data than with less and cleaner data. Additionally, we experiment on other datasets, without reaching the same level of performance, because of domain mismatch between training and test data.
2009
Web Augmentation of Language Models for Continuous Speech Recognition of SMS Text Messages
Mathias Creutz | Sami Virpioja | Anna Kovaleva
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)
2008
Speech to speech machine translation: Biblical chatter from Finnish to English
David Ellis | Mathias Creutz | Timo Honkela | Mikko Kurimo
Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages
2007
Analysis of Morph-Based Speech Recognition and the Modeling of Out-of-Vocabulary Words Across Languages
Mathias Creutz | Teemu Hirsimäki | Mikko Kurimo | Antti Puurula | Janne Pylkkönen | Vesa Siivola | Matti Varjokallio | Ebru Arisoy | Murat Saraçlar | Andreas Stolcke
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference
Morphology-aware statistical machine translation based on morphs induced in an unsupervised manner
Sami Virpioja | Jaakko J. Väyrynen | Mathias Creutz | Markus Sadeniemi
Proceedings of Machine Translation Summit XI: Papers
2004
Induction of a Simple Morphology for Highly-Inflecting Languages
Mathias Creutz | Krista Lagus
Proceedings of the 7th Meeting of the ACL Special Interest Group in Computational Phonology: Current Themes in Computational Phonology and Morphology
2003
Unsupervised Segmentation of Words Using Prior Distributions of Morph Length and Frequency
Mathias Creutz
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics
2002
Unsupervised Discovery of Morphemes
Mathias Creutz | Krista Lagus
Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning