Matīss Rikters

Also published as: Matiss Rikters


2024

Entity-aware Multi-task Training Helps Rare Word Machine Translation
Matiss Rikters | Makoto Miwa
Proceedings of the 17th International Natural Language Generation Conference

Named entities (NEs) are integral to preserving context and conveying accurate information in machine translation (MT). Challenges often lie in handling NE diversity, ambiguity, and rarity, and in ensuring alignment and consistency. In this paper, we explore the effect of NE-aware model fine-tuning on the handling of NEs in MT. We generate data for NE recognition (NER) and NE-aware MT using common NER tools from spaCy, and align entities in parallel data. Experiments with fine-tuning variations of pre-trained T5 models on NE-related generation tasks between English and German show promising results, with more NEs appearing in the output and BLEU score improvements over the non-tuned baselines.
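The abstract mentions generating NER and NE-aware MT data with spaCy and aligning entities across parallel sentences. Below is a minimal sketch of that idea; it is not the authors' actual pipeline, and the model name, the surface-match alignment heuristic, and the example sentence pair are illustrative assumptions.

import spacy

# Minimal sketch, not the paper's pipeline. Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp_en = spacy.load("en_core_web_sm")

def ner_example(sentence: str):
    """Turn a sentence into a NER-style example: (text, [(start, end, label), ...])."""
    doc = nlp_en(sentence)
    return sentence, [(ent.start_char, ent.end_char, ent.label_) for ent in doc.ents]

def align_entities(src: str, tgt: str):
    """Keep source entities whose surface form also appears on the target side.
    A crude stand-in for the entity alignment described in the abstract."""
    tgt_lower = tgt.lower()
    return [ent.text for ent in nlp_en(src).ents if ent.text.lower() in tgt_lower]

# Hypothetical parallel pair for illustration.
src = "Angela Merkel met Emmanuel Macron in Berlin."
tgt = "Angela Merkel traf Emmanuel Macron in Berlin."
print(ner_example(src))
print(align_entities(src, tgt))  # entities whose surface forms match across the pair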

AIST AIRC Systems for the WMT 2024 Shared Tasks
Matiss Rikters | Makoto Miwa
Proceedings of the Ninth Conference on Machine Translation

At WMT 2024, AIST AIRC participated in the General Machine Translation shared task and the Biomedical Translation task. We trained constrained-track models for translation between English, German, and Japanese. Before training the final models, we first filtered the parallel data, then performed iterative back-translation as well as parallel data distillation. We experimented with training baseline Transformer models and Mega models, and with fine-tuning open-source T5 and Gemma model checkpoints on the filtered parallel data. Our primary submissions contain translations from ensembles of two Mega model checkpoints, and our contrastive submissions are generated by our fine-tuned T5 model checkpoints.

Annotations for Exploring Food Tweets from Multiple Aspects
Matiss Rikters | Rinalds Vīksna | Edison Marrese-Taylor
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

This research builds upon the Latvian Twitter Eater Corpus (LTEC), which focuses on the narrow domain of tweets related to food, drinks, eating and drinking. LTEC has been collected for more than 12 years and now contains almost 3 million tweets with basic information as well as extended, automatically and manually annotated metadata. In this paper we supplement the LTEC with manually annotated subsets of evaluation data for machine translation, named entity recognition, timeline-balanced sentiment analysis, and text-image relation classification. We experiment with each of the data sets using baseline models and highlight future challenges for various modelling approaches.

Revisiting Context Choices for Context-aware Machine Translation
Matiss Rikters | Toshiaki Nakazawa
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

One of the most popular methods for context-aware machine translation (MT) is to use separate encoders for the source sentence and the context as multiple sources for one target sentence. Recent work has cast doubt on whether these models actually learn useful signals from the context or whether the improvements in automatic evaluation metrics are just a side-effect. We show that multi-source transformer models improve MT over standard transformer-base models even with empty lines provided as context, but that translation quality improves significantly (1.51 - 2.65 BLEU) when a sufficient amount of correct context is provided. We also show that although randomly shuffled in-domain context can improve over the baselines, correct context further improves translation quality, while random out-of-domain context further degrades it.
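The central manipulation in the abstract is feeding the multi-source model different kinds of context: empty lines, randomly shuffled in-domain context, or the correct preceding sentences. The sketch below shows one way such evaluation variants could be built from a document-aligned test set; it is an illustration under assumed data structures, not the paper's code.

import random

def make_context_variants(documents, seed=0):
    """documents: a list of documents, each an ordered list of source sentences.
    Returns three lists of (context, source) pairs covering the same sentences:
    correct previous-sentence context, empty context, and shuffled in-domain context."""
    rng = random.Random(seed)
    correct, empty, shuffled = [], [], []
    for doc in documents:
        contexts = [""] + doc[:-1]      # correct context = the previous sentence
        random_ctx = contexts[:]
        rng.shuffle(random_ctx)         # random in-domain context
        for src, ctx, rnd in zip(doc, contexts, random_ctx):
            correct.append((ctx, src))
            empty.append(("", src))
            shuffled.append((rnd, src))
    return correct, empty, shuffled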

2023

The Future of Meat: Sentiment Analysis of Food Tweets
Matiss Rikters | Maija Kāle
Proceedings of the 11th International Workshop on Natural Language Processing for Social Media

AIST AIRC Submissions to the WMT23 Shared Task
Matiss Rikters | Makoto Miwa
Proceedings of the Eighth Conference on Machine Translation

This paper describes the development process of NMT systems that were submitted to the WMT 2023 General Translation task by the team of AIST AIRC. We trained constrained-track models for translation between English, German, and Japanese. Before training the final models, we first filtered the parallel and monolingual data, then performed iterative back-translation as well as parallel data distillation to be used for non-autoregressive model training. We experimented with training Transformer models, Mega models, and custom non-autoregressive sequence-to-sequence models with encoder and decoder weights initialised from a multilingual BERT base model. Our primary submissions contain translations from ensembles of two Mega model checkpoints, and our contrastive submissions are generated by our non-autoregressive models.

2022

Machine Translation for Livonian: Catering to 20 Speakers
Matīss Rikters | Marili Tomingas | Tuuli Tuisk | Valts Ernštreits | Mark Fishel
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Livonian is one of the most endangered languages in Europe, with just a tiny handful of speakers and virtually no publicly available corpora. In this paper we tackle the task of developing neural machine translation (NMT) between Livonian and English, with a two-fold aim: on one hand, preserving the language and on the other – enabling access to Livonian folklore, life stories and other textual intangible heritage, as well as making it easier to create further parallel corpora. We rely on Livonian’s linguistic similarity to Estonian and Latvian and collect parallel and monolingual data for the four languages for translation experiments. We combine different low-resource NMT techniques like zero-shot translation, cross-lingual transfer and synthetic data creation to reach the highest possible translation quality as well as to find which base languages are empirically more helpful for transfer to Livonian. The resulting NMT systems and the collected monolingual and parallel data, including a manually translated and verified translation benchmark, are publicly released via OPUS and Huggingface repositories.

2020

Document-aligned Japanese-English Conversation Parallel Corpus
Matīss Rikters | Ryokan Ri | Tong Li | Toshiaki Nakazawa
Proceedings of the Fifth Conference on Machine Translation

Sentence-level (SL) machine translation (MT) has reached acceptable quality for many high-resource languages, but document-level (DL) MT has not, as it is difficult to 1) train, given the small amount of DL data, and 2) evaluate, since the main methods and data sets focus on SL evaluation. To address the first issue, we present a document-aligned Japanese-English conversation corpus, including balanced, high-quality business conversation data for tuning and testing. As for the second issue, we manually identify the main areas where SL MT fails to produce adequate translations due to a lack of context. We then create an evaluation set in which these phenomena are annotated, to facilitate the evaluation of DL systems. We train MT models using our corpus to demonstrate how using context leads to improvements.

Customized Neural Machine Translation Systems for the Swiss Legal Domain
Rubén Martínez-Domínguez | Matīss Rikters | Artūrs Vasiļevskis | Mārcis Pinnis | Paula Reichenberg
Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track)

The University of Tokyo’s Submissions to the WAT 2020 Shared Task
Matīss Rikters | Toshiaki Nakazawa | Ryokan Ri
Proceedings of the 7th Workshop on Asian Translation

The paper describes the development process of the University of Tokyo’s NMT systems that were submitted to the WAT 2020 Document-level Business Scene Dialogue Translation sub-task. We describe the data processing workflow, the NMT system training architectures, and automatic evaluation results. For the WAT 2020 shared task, we submitted 12 systems (both constrained and unconstrained) for the English-Japanese and Japanese-English translation directions. The submitted systems were trained using Transformer models, and one was an SMT baseline.

2019

Designing the Business Conversation Corpus
Matīss Rikters | Ryokan Ri | Tong Li | Toshiaki Nakazawa
Proceedings of the 6th Workshop on Asian Translation

While machine translation of written text has come far in the past several years, thanks to the increasing availability of parallel corpora and corpus-based training technologies, automatic translation of spoken text and dialogues remains challenging even for modern systems. In this paper, we aim to boost the machine translation quality of conversational texts by introducing a newly constructed Japanese-English business conversation parallel corpus. A detailed analysis of the corpus is provided along with challenging examples for automatic translation. We also experiment with adding the corpus to a machine translation training scenario and show how the resulting system benefits from its use.

Tilde’s Machine Translation Systems for WMT 2019
Marcis Pinnis | Rihards Krišlauks | Matīss Rikters
Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)

The paper describes the development process of Tilde’s NMT systems for the WMT 2019 shared task on news translation. We trained systems for the English-Lithuanian and Lithuanian-English translation directions in the constrained and unconstrained tracks. We built upon the best methods of the previous year’s competition and combined them with recent advancements in the field. We also present a new method to ensure source domain adherence in back-translated data. Our systems achieved a shared first place in the human evaluation.

2018

Training and Adapting Multilingual NMT for Less-resourced and Morphologically Rich Languages
Matīss Rikters | Mārcis Pinnis | Rihards Krišlauks
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Tilde’s Machine Translation Systems for WMT 2018
Mārcis Pinnis | Matīss Rikters | Rihards Krišlauks
Proceedings of the Third Conference on Machine Translation: Shared Task Papers

The paper describes the development process of Tilde’s NMT systems that were submitted for the WMT 2018 shared task on news translation. We describe the data filtering and pre-processing workflows, the NMT system training architectures, and automatic evaluation results. For the WMT 2018 shared task, we submitted seven systems (both constrained and unconstrained) for the English-Estonian and Estonian-English translation directions. The submitted systems were trained using Transformer models.

2017

Paying Attention to Multi-Word Expressions in Neural Machine Translation
Matīss Rikters | Ondřej Bojar
Proceedings of Machine Translation Summit XVI: Research Track

Confidence through Attention
Matīss Rikters | Mark Fishel
Proceedings of Machine Translation Summit XVI: Research Track

C-3MA: Tartu-Riga-Zurich Translation Systems for WMT17
Matīss Rikters | Chantal Amrhein | Maksym Del | Mark Fishel
Proceedings of the Second Conference on Machine Translation

2016

Neural Network Language Models for Candidate Scoring in Hybrid Multi-System Machine Translation
Matīss Rikters
Proceedings of the Sixth Workshop on Hybrid Approaches to Translation (HyTra6)

This paper presents a comparison of how using different neural-network-based language modelling tools for selecting the best candidate fragments affects the final output translation quality in a hybrid multi-system machine translation setup. Experiments were conducted by comparing perplexity and BLEU scores on common test cases using the same training data set. A 12-gram statistical language model was selected as a baseline against which three neural-network-based models with different characteristics were compared. The models were integrated into a hybrid system that relies on the perplexity score of a sentence fragment to produce the best-fitting translations. The results show a correlation between language model perplexity and BLEU scores, as well as overall improvements in BLEU.
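The selection step the abstract describes - scoring each candidate fragment with a language model and keeping the lowest-perplexity hypothesis - can be sketched as below. The scorer here is a hypothetical placeholder; in the paper it is a 12-gram statistical or neural language model.

def select_best(candidates, perplexity):
    """Pick the candidate translation fragment with the lowest LM perplexity (lower = more fluent)."""
    return min(candidates, key=perplexity)

# Stand-in scorer for illustration only; a real system would query a trained language model.
def toy_perplexity(sentence: str) -> float:
    vocabulary = {"the", "cat", "is", "on", "mat"}
    words = sentence.lower().split()
    unknown = sum(w not in vocabulary for w in words)
    return (unknown + 1) / (len(words) + 1)

candidates = ["the cat is on the mat", "mat the cat on is zzz"]
print(select_best(candidates, toy_perplexity))  # -> "the cat is on the mat"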

Syntax-based Multi-system Machine Translation
Matīss Rikters | Inguna Skadiņa
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This paper describes a hybrid machine translation system that uses a parser to acquire syntactic chunks of a source sentence, translates the chunks with multiple online machine translation (MT) system application programming interfaces (APIs), and creates output by combining the translated chunks into the best possible translation. The best translation hypothesis is selected by calculating the perplexity of each translated chunk. The goal of this approach is to enhance the baseline multi-system hybrid translation (MHyT) system, which uses only a language model to select the best translation from the translations obtained with the different APIs, and to improve overall English-Latvian machine translation quality over each of the individual MT APIs. The presented syntax-based multi-system translation (SyMHyT) system demonstrates an improvement in terms of BLEU and NIST scores compared to the baseline system. Improvements range from 1.74 up to 2.54 BLEU points.
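Combining the pieces the abstract mentions - syntactic chunks, several online MT APIs, and perplexity-based selection - the chunk-level combination step could look roughly like the following. The engine and scorer interfaces are hypothetical stand-ins; the actual SyMHyT system wraps real online MT APIs and a syntactic parser.

from typing import Callable, Dict, List

def translate_by_chunks(
    chunks: List[str],
    engines: Dict[str, Callable[[str], str]],  # name -> translate(chunk) -> translated chunk
    perplexity: Callable[[str], float],        # language model scorer, lower is better
) -> str:
    """Translate each syntactic chunk with every engine and keep the lowest-perplexity hypothesis."""
    output = []
    for chunk in chunks:
        hypotheses = [translate(chunk) for translate in engines.values()]
        output.append(min(hypotheses, key=perplexity))
    return " ".join(output)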

2015

Multi-system machine translation using online APIs for English-Latvian
Matīss Rikters
Proceedings of the Fourth Workshop on Hybrid Approaches to Translation (HyTra)