Simon Wiesler
2013
The RWTH Aachen German and English LVCSR systems for IWSLT-2013
M. Ali Basha Shaik | Zoltan Tüske | Simon Wiesler | Markus Nußbaum-Thom | Stephan Peitz | Ralf Schlüter | Hermann Ney
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper, German and English large vocabulary continuous speech recognition (LVCSR) systems developed by RWTH Aachen University for the IWSLT-2013 evaluation campaign are presented. Substantial improvements are obtained with state-of-the-art monolingual and multilingual bottleneck features. In addition, an open-vocabulary approach using morphemic sub-lexical units is investigated, along with language model adaptation, for the German LVCSR system. For both languages, competitive WERs are achieved using system combination.
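The abstract refers to bottleneck features without further detail. As a hedged illustration of the general tandem/bottleneck idea, the sketch below assumes a simple MLP whose narrow hidden layer yields compact acoustic features for a downstream recognizer; all layer sizes, names, and the training setup are illustrative assumptions, not the system described in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BottleneckMLP:
    """Sketch of a bottleneck feature extractor: an MLP trained on
    phonetic targets (training loop omitted) whose narrow hidden
    layer provides compact features for the downstream recognizer.
    Dimensions here are illustrative assumptions."""

    def __init__(self, dim_in=440, dim_hidden=2000, dim_bn=60, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.01, size=(dim_in, dim_hidden))
        self.b1 = np.zeros(dim_hidden)
        self.W2 = rng.normal(scale=0.01, size=(dim_hidden, dim_bn))
        self.b2 = np.zeros(dim_bn)

    def bottleneck_features(self, frames):
        # frames: (num_frames, dim_in) stacked spectral feature vectors
        h = sigmoid(frames @ self.W1 + self.b1)
        # the linear bottleneck activations serve as the new features
        return h @ self.W2 + self.b2

# Usage: extract 60-dimensional features for 100 random frames.
mlp = BottleneckMLP()
feats = mlp.bottleneck_features(np.random.default_rng(1).normal(size=(100, 440)))
print(feats.shape)  # (100, 60)
```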
2012
Spoken language translation using automatically transcribed text in training
Stephan Peitz | Simon Wiesler | Markus Nußbaum-Thom | Hermann Ney
Proceedings of the 9th International Workshop on Spoken Language Translation: Papers
In spoken language translation, a machine translation system takes speech as input and translates it into another language. A standard machine translation system is trained on written-language data and expects written language as input. In this paper, we propose an approach to close the gap between the output of automatic speech recognition and the input of machine translation by training the translation system on automatically transcribed speech. In our experiments, we show improvements of up to 0.9 BLEU points on the IWSLT 2012 English-to-French speech translation task.
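The core idea, training the translation model on ASR output of the training audio so that training and test inputs match, can be sketched in a few lines. This is a minimal sketch of the general technique; the function names and the normalization rule are illustrative assumptions, not the paper's actual pipeline.

```python
import re

def normalize_asr_style(text):
    """Mimic typical ASR output: lowercase, no punctuation."""
    return re.sub(r"[^\w\s']", "", text.lower()).strip()

def build_asr_trained_corpus(asr_transcripts, target_references):
    """Pair automatic transcripts of the training audio with the
    existing target-language references, yielding a parallel corpus
    whose source side resembles what the MT decoder sees at test time."""
    assert len(asr_transcripts) == len(target_references)
    return [(normalize_asr_style(src), tgt)
            for src, tgt in zip(asr_transcripts, target_references)]

# Usage with toy data:
corpus = build_asr_trained_corpus(
    ["so this is a talk about speech translation"],
    ["ceci est donc un exposé sur la traduction de la parole"],
)
print(corpus[0])
```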
2011
Lexicon models for hierarchical phrase-based machine translation
Matthias Huck | Saab Mansour | Simon Wiesler | Hermann Ney
Proceedings of the 8th International Workshop on Spoken Language Translation: Papers
In this paper, we investigate lexicon models for hierarchical phrase-based statistical machine translation. We study five types of lexicon models: a model which is extracted from word-aligned training data and, given the word alignment matrix, relies on pure relative frequencies [1]; the IBM model 1 lexicon [2]; a regularized version of IBM model 1; a triplet lexicon model variant [3]; and a discriminatively trained word lexicon model [4]. We explore source-to-target models with phrase-level as well as sentence-level scoring and target-to-source models with scoring on the phrase level only. For the first two types of lexicon models, we compare several scoring variants. All models are used during search, i.e., they are incorporated directly into the log-linear model combination of the decoder. Novel contributions include phrase table smoothing with triplet lexicon models and with discriminative word lexicons. We also propose a new regularization technique for IBM model 1 by means of the Kullback-Leibler divergence with the empirical unigram distribution as the regularization term. Experiments are carried out on the large-scale NIST Chinese→English translation task and on the English→French and Arabic→English IWSLT TED tasks. For Chinese→English and English→French, we obtain the best results by using the discriminative word lexicon to smooth our phrase tables.
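Of the five lexicon models, IBM model 1 admits a compact illustration. Below is a minimal sketch of sentence-level source-to-target scoring under the standard model 1 formula, where each source word's probability is averaged over all target words plus an empty (NULL) word. The lexicon table and flooring constant are illustrative assumptions; EM estimation of the table is omitted.

```python
import math

def ibm1_sentence_log_score(src_words, tgt_words, lex_prob, floor=1e-7):
    """Standard IBM model 1 sentence score in log space:
    log p(f|e) = sum_j log( 1/(I+1) * sum_{i=0..I} p(f_j | e_i) ),
    with e_0 the empty (NULL) word. Unseen pairs are floored."""
    tgt = ["<NULL>"] + list(tgt_words)
    log_score = 0.0
    for f in src_words:
        p = sum(lex_prob.get((f, e), floor) for e in tgt) / len(tgt)
        log_score += math.log(p)
    return log_score

# Toy lexicon p(f|e); in practice this table is estimated with EM.
lex = {("maison", "house"): 0.8, ("la", "the"): 0.7, ("la", "<NULL>"): 0.1}
print(ibm1_sentence_log_score(["la", "maison"], ["the", "house"], lex))
```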
Co-authors
- Hermann Ney 3
- Stephan Peitz 2
- Markus Nußbaum-Thom 2
- Matthias Huck 1
- Saab Mansour 1