Evgeniy Shin
2013
The 2013 KIT IWSLT speech-to-text systems for German and English
Kevin Kilgour | Christian Mohr | Michael Heck | Quoc Bao Nguyen | Van Huy Nguyen | Evgeniy Shin | Igor Tseyzer | Jonas Gehring | Markus Müller | Matthias Sperber | Sebastian Stüker | Alex Waibel
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes our English Speech-to-Text (STT) systems for the 2013 IWSLT TED ASR track. The systems consist of multiple subsystems that are combinations of different front-ends (e.g., MVDR-MFCC-based and lMel-based ones), GMM and NN acoustic models, and different phone sets. The outputs of the subsystems are combined via confusion network combination. Decoding is done in two stages, where the systems of the second stage are adapted in an unsupervised manner on the combination of the first-stage outputs using VTLN, MLLR, and cMLLR.
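To make the combination step concrete, below is a minimal sketch of confusion-network combination, assuming the subsystem outputs have already been aligned into parallel confusion networks; the slot representation and the name `combine_cns` are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: confusion-network combination (CNC) over pre-aligned networks.
# A confusion network is a list of slots; each slot maps a candidate word
# ("" denotes an epsilon/skip arc) to its posterior probability.
from collections import defaultdict

Slot = dict  # word -> posterior

def combine_cns(cns: list[list[Slot]], weights: list[float]) -> list[str]:
    """Average word posteriors slot-by-slot across systems, then pick
    the highest-scoring word per slot (dropping epsilon winners)."""
    assert len({len(cn) for cn in cns}) == 1, "slots must be pre-aligned"
    hyp = []
    for slots in zip(*cns):
        scores = defaultdict(float)
        for w, slot in zip(weights, slots):
            for word, post in slot.items():
                scores[word] += w * post
        best = max(scores, key=scores.get)
        if best:  # an epsilon winner means no word at this position
            hyp.append(best)
    return hyp

# Two subsystems disagreeing on one slot; the combination resolves it.
cn_a = [{"the": 0.9, "": 0.1}, {"cat": 0.6, "cap": 0.4}]
cn_b = [{"the": 0.8, "": 0.2}, {"cat": 0.3, "cap": 0.7}]
print(combine_cns([cn_a, cn_b], weights=[0.5, 0.5]))  # ['the', 'cap']
```

In a full system the slot alignment itself is the hard part (it is derived from lattice alignment), and the per-system weights would be tuned on held-out data.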
Maximum entropy language modeling for Russian ASR
Evgeniy Shin | Sebastian Stüker | Kevin Kilgour | Christian Fügen | Alex Waibel
Proceedings of the 10th International Workshop on Spoken Language Translation: Papers
Russian is a challenging language for automatic speech recognition systems due to its rich morphology. This rich morphology stems from Russian’s highly inflectional nature and the frequent use of prefixes and suffixes. Russian also has a very free word order, and changes in word order are used to convey connotations of a sentence. These phenomena are difficult for traditional n-gram models to handle. In this paper we therefore investigate the use of a maximum entropy language model for Russian whose features are specifically designed to deal with the inflections in Russian, as well as the loose word order. We combine this with a subword-based language model in order to alleviate the problem of the large vocabulary sizes necessary for dealing with highly inflecting languages. Applying the maximum entropy language model during re-scoring improves the word error rate of our recognition system by 1.2% absolute, while the use of the subword-based language model reduces the vocabulary size from 120k to 40k and the OOV rate from 4.8% to 2.1%.
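The sketch below illustrates the general shape of such a maximum entropy language model. The feature templates and the toy stem/suffix split are assumptions for illustration, not the paper's actual feature design: morphology-aware features let the model share weight across inflected forms of the same stem, and an unordered history feature softens the dependence on word order.

```python
# Sketch: a maximum-entropy LM with morphology-aware features.
# p(word | history) = exp(sum of fired feature weights) / normaliser.
import math

def features(history: list[str], word: str) -> list[str]:
    """Fire sparse indicator features: a full-form bigram, stem and
    suffix features that generalise across inflections, and unordered
    'bag' features over the history for free word order."""
    stem, suffix = word[:-2] or word, word[-2:]  # toy segmentation
    feats = [
        f"bigram:{history[-1]}_{word}",
        f"stem:{stem}",
        f"suffix:{suffix}",
    ]
    feats += [f"bag:{h}_{word}" for h in sorted(set(history))]
    return feats

def maxent_prob(weights: dict[str, float], history: list[str],
                word: str, vocab: list[str]) -> float:
    score = lambda w: math.exp(sum(weights.get(f, 0.0)
                                   for f in features(history, w)))
    return score(word) / sum(score(w) for w in vocab)

# Toy example: a stem feature learned from "кошки" also boosts "кошка".
vocab = ["кошка", "кошки", "собака"]
w = {"stem:кош": 1.0, "bigram:рыжая_кошка": 0.5}
print(maxent_prob(w, ["рыжая"], "кошка", vocab))
```

A subword-based LM as described in the abstract would additionally decompose rare inflected forms into smaller units before modeling, which is what shrinks the vocabulary and the OOV rate.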
Co-authors
- Kevin Kilgour 2
- Sebastian Stüker 2
- Alex Waibel 2
- Christian Mohr 1
- Michael Heck 1