2024
FINDINGS OF THE IWSLT 2024 EVALUATION CAMPAIGN
Ibrahim Said Ahmad | Antonios Anastasopoulos | Ondřej Bojar | Claudia Borg | Marine Carpuat | Roldano Cattoni | Mauro Cettolo | William Chen | Qianqian Dong | Marcello Federico | Barry Haddow | Dávid Javorský | Mateusz Krubiński | Tsz Kin Lam | Xutai Ma | Prashant Mathur | Evgeny Matusov | Chandresh Maurya | John McCrae | Kenton Murray | Satoshi Nakamura | Matteo Negri | Jan Niehues | Xing Niu | Atul Kr. Ojha | John Ortega | Sara Papi | Peter Polák | Adam Pospíšil | Pavel Pecina | Elizabeth Salesky | Nivedita Sethiya | Balaram Sarkar | Jiatong Shi | Claytone Sikasote | Matthias Sperber | Sebastian Stüker | Katsuhito Sudoh | Brian Thompson | Alex Waibel | Shinji Watanabe | Patrick Wilken | Petr Zemánek | Rodolfo Zevallos
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
This paper reports on the shared tasks organized by the 21st IWSLT Conference. The shared tasks address seven scientific challenges in spoken language translation: simultaneous and offline translation, automatic subtitling and dubbing, speech-to-speech translation, dialect and low-resource speech translation, and Indic languages. The shared tasks attracted 17 teams whose submissions are documented in 27 system papers. The growing interest in spoken language translation is also reflected in the steadily increasing number of shared task organizers and contributors to the overview paper, split almost evenly between industry and academia.
2023
Speech Translation with Style: AppTek’s Submissions to the IWSLT Subtitling and Formality Tracks in 2023
Parnia Bahar | Patrick Wilken | Javier Iranzo-Sánchez | Mattia Di Gangi | Evgeny Matusov | Zoltán Tüske
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
AppTek participated in the subtitling and formality tracks of the IWSLT 2023 evaluation. This paper describes the details of our subtitling pipeline - speech segmentation, speech recognition, punctuation prediction and inverse text normalization, text machine translation and direct speech-to-text translation, intelligent line segmentation - and how we make use of the provided subtitling-specific data in training and fine-tuning. The evaluation results show that our final submissions are competitive, in particular outperforming the submissions by other participants by 5% absolute as measured by the SubER subtitle quality metric. For the formality track, we participate with our En-Ru and En-Pt production models, which support formality control via prefix tokens. Except for informal Portuguese, we achieve near-perfect formality-level accuracy while at the same time offering high general translation quality.
2022
Automatic Video Dubbing at AppTek
Mattia Di Gangi | Nick Rossenbach | Alejandro Pérez | Parnia Bahar | Eugen Beck | Patrick Wilken | Evgeny Matusov
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
Video dubbing is the activity of revoicing a video while offering a viewing experience equivalent to the original video. The revoicing usually comes with a changed script, mostly in a different language, and should reproduce the original emotions, be coherent with the body language, and be lip-synchronized. In this project, we aim to build an automatic dubbing (AD) system in three phases: (1) voice-over; (2) emotional voice-over; (3) full dubbing, while enhancing the system with human-in-the-loop capabilities for a higher quality.
SubER - A Metric for Automatic Evaluation of Subtitle Quality
Patrick Wilken | Panayota Georgakopoulou | Evgeny Matusov
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
This paper addresses the problem of evaluating the quality of automatically generated subtitles, which includes not only the quality of the machine-transcribed or translated speech, but also the quality of line segmentation and subtitle timing. We propose SubER - a single novel metric based on edit distance with shifts that takes all of these subtitle properties into account. We compare it to existing metrics for evaluating transcription, translation, and subtitle quality. A careful human evaluation in a post-editing scenario shows that the new metric has a high correlation with the post-editing effort and direct human assessment scores, outperforming baseline metrics considering only the subtitle text, such as WER and BLEU, and existing methods to integrate segmentation and timing features.
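SubER belongs to the edit-distance family of metrics. As a point of reference, the plain word error rate that the paper uses as a text-only baseline can be computed with a standard Levenshtein dynamic program; the sketch below is illustrative only and is not the SubER implementation, which additionally accounts for shifts, line segmentation, and timing.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein word edit distance divided by the reference length.
    This is the text-only baseline metric that SubER extends with
    shift, segmentation, and timing costs."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```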
AppTek’s Submission to the IWSLT 2022 Isometric Spoken Language Translation Task
Patrick Wilken | Evgeny Matusov
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
To participate in the Isometric Spoken Language Translation Task of the IWSLT 2022 evaluation, constrained condition, AppTek developed neural Transformer-based systems for English-to-German with various mechanisms of length control, ranging from source-side and target-side pseudo-tokens to encoding of remaining length in characters that replaces positional encoding. We further increased translation length compliance by sentence-level selection of length-compliant hypotheses from different system variants, as well as rescoring of N-best candidates from a single system. Length-compliant back-translated and forward-translated synthetic data, as well as other parallel data variants derived from the original MuST-C training corpus were important for a good quality/desired length trade-off. Our experimental results show that length compliance levels above 90% can be reached while minimizing losses in MT quality as measured in BERT and BLEU scores.
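The source-side pseudo-token mechanism mentioned in the abstract can be sketched as follows: at training time each source sentence is prefixed with a token encoding the target/source length ratio, so that at inference time the desired token can be forced. The bucket names and the 10% tolerance below are illustrative assumptions, not the exact configuration used in the submission.

```python
def add_length_token(source: str, target: str) -> str:
    """Prepend a pseudo-token encoding the target/source character-length
    ratio. At inference time, forcing <normal> asks the model for output
    of roughly the same length as the source (isometric translation).
    Token names and the 10% tolerance are illustrative."""
    ratio = len(target) / max(len(source), 1)
    if ratio < 0.9:
        tag = "<short>"
    elif ratio <= 1.1:
        tag = "<normal>"
    else:
        tag = "<long>"
    return f"{tag} {source}"
```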
2021
Without Further Ado: Direct and Simultaneous Speech Translation by AppTek in 2021
Parnia Bahar | Patrick Wilken | Mattia A. Di Gangi | Evgeny Matusov
Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021)
This paper describes the offline and simultaneous speech translation systems developed at AppTek for IWSLT 2021. Our offline ST submission includes the direct end-to-end system and the so-called posterior tight integrated model, which is akin to the cascade system but is trained in an end-to-end fashion, where all the cascaded modules are end-to-end models themselves. For simultaneous ST, we combine hybrid automatic speech recognition with a machine translation approach whose translation policy decisions are learned from statistical word alignments. Compared to last year, we improve general quality and provide a wider range of quality/latency trade-offs, both due to a data augmentation method making the MT model robust to varying chunk sizes. Finally, we present a method for ASR output segmentation into sentences that introduces a minimal additional delay.
2020
Flexible Customization of a Single Neural Machine Translation System with Multi-dimensional Metadata Inputs
Evgeny Matusov | Patrick Wilken | Christian Herold
Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track)
Start-Before-End and End-to-End: Neural Speech Translation by AppTek and RWTH Aachen University
Parnia Bahar | Patrick Wilken | Tamer Alkhouli | Andreas Guta | Pavel Golik | Evgeny Matusov | Christian Herold
Proceedings of the 17th International Conference on Spoken Language Translation
AppTek and RWTH Aachen University team together to participate in the offline and simultaneous speech translation tracks of IWSLT 2020. For the offline task, we create both cascaded and end-to-end speech translation systems, paying attention to careful data selection and weighting. In the cascaded approach, we combine high-quality hybrid automatic speech recognition (ASR) with the Transformer-based neural machine translation (NMT). Our end-to-end direct speech translation systems benefit from pretraining of adapted encoder and decoder components, as well as synthetic data and fine-tuning and thus are able to compete with cascaded systems in terms of MT quality. For simultaneous translation, we utilize a novel architecture that makes dynamic decisions, learned from parallel data, to determine when to continue feeding on input or generate output words. Experiments with speech and text input show that even at low latency this architecture leads to superior translation results.
Neural Simultaneous Speech Translation Using Alignment-Based Chunking
Patrick Wilken | Tamer Alkhouli | Evgeny Matusov | Pavel Golik
Proceedings of the 17th International Conference on Spoken Language Translation
In simultaneous machine translation, the objective is to determine when to produce a partial translation given a continuous stream of source words, with a trade-off between latency and quality. We propose a neural machine translation (NMT) model that makes dynamic decisions when to continue feeding on input or generate output words. The model is composed of two main components: one to dynamically decide on ending a source chunk, and another that translates the consumed chunk. We train the components jointly and in a manner consistent with the inference conditions. To generate chunked training data, we propose a method that utilizes word alignment while also preserving enough context. We compare models with bidirectional and unidirectional encoders of different depths, both on real speech and text input. Our results on the IWSLT 2020 English-to-German task outperform a wait-k baseline by 2.6 to 3.7% BLEU absolute.
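The wait-k baseline referenced in the abstract follows a fixed policy: read k source tokens, then alternate between emitting one target token and reading one more source token. The sketch below simulates that read/write schedule with a stand-in decoder; it illustrates the baseline policy only, not the paper's alignment-based chunking model, and `translate_step` is a hypothetical placeholder for an NMT decoder.

```python
def wait_k_translate(source_tokens, translate_step, k=3):
    """Simulate a wait-k policy: read k source tokens first, then
    alternate between emitting one target token and reading one more
    source token. `translate_step` stands in for an NMT decoder that
    returns the next target token, or None at end of sentence."""
    output = []
    read = min(k, len(source_tokens))
    while True:
        # Emit one target token given the source prefix consumed so far.
        token = translate_step(source_tokens[:read], output)
        if token is None:
            break
        output.append(token)
        if read < len(source_tokens):
            read += 1  # the "write-then-read" cycle of wait-k
    return output


def copy_decoder(src_prefix, tgt_so_far):
    """Toy decoder: copy the consumed source prefix token by token."""
    if len(tgt_so_far) < len(src_prefix):
        return src_prefix[len(tgt_so_far)]
    return None
```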
2019
Customizing Neural Machine Translation for Subtitling
Evgeny Matusov | Patrick Wilken | Yota Georgakopoulou
Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)
In this work, we customized a neural machine translation system for translation of subtitles in the domain of entertainment. The neural translation model was adapted to the subtitling content and style and extended by a simple, yet effective technique for utilizing inter-sentence context for short sentences such as dialog turns. The main contribution of the paper is a novel subtitle segmentation algorithm that predicts the end of a subtitle line given the previous word-level context using a recurrent neural network learned from human segmentation decisions. This model is combined with subtitle length and duration constraints established in the subtitling industry. We conducted a thorough human evaluation with two post-editors (English-to-Spanish translation of a documentary and a sitcom). It showed a notable productivity increase of up to 37% as compared to translating from scratch and significant reductions in human translation edit rate in comparison with the post-editing of the baseline non-adapted system without a learned segmentation model.
2018
Neural Speech Translation at AppTek
Evgeny Matusov | Patrick Wilken | Parnia Bahar | Julian Schamper | Pavel Golik | Albert Zeyer | Joan Albert Silvestre-Cerda | Adrià Martínez-Villaronga | Hendrik Pesch | Jan-Thorsten Peter
Proceedings of the 15th International Conference on Spoken Language Translation
This work describes AppTek’s speech translation pipeline that includes strong state-of-the-art automatic speech recognition (ASR) and neural machine translation (NMT) components. We show how these components can be tightly coupled by encoding ASR confusion networks, as well as ASR-like noise adaptation, vocabulary normalization, and implicit punctuation prediction during translation. In another experimental setup, we propose a direct speech translation approach that can be scaled to translation tasks with large amounts of text-only parallel training data but a limited number of hours of recorded and human-translated speech.
2017
Neural and Statistical Methods for Leveraging Meta-information in Machine Translation
Shahram Khadivi | Patrick Wilken | Leonard Dahlmann | Evgeny Matusov
Proceedings of Machine Translation Summit XVI: Research Track