2023
Enhancing Video Translation Context with Object Labels
Jeremy Gwinnup, Tim Anderson, Brian Ore, Eric Hansen, Kevin Duh
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
We present a simple yet efficient method to enhance the quality of machine translation models trained on multimodal corpora by augmenting the training text with labels of objects detected in the corresponding video segments. We then test the effects of label augmentation in both baseline and two automatic speech recognition (ASR) conditions. In contrast with multimodal techniques that merge visual and textual features, our modular method is easy to implement and its results are more interpretable. Comparisons between Transformer translation architectures trained with baseline and augmented labels show improvements of up to +1.0 BLEU on the How2 dataset.
2020
The AFRL IWSLT 2020 Systems: Work-From-Home Edition
Brian Ore, Eric Hansen, Tim Anderson, Jeremy Gwinnup
Proceedings of the 17th International Conference on Spoken Language Translation
This report summarizes the Air Force Research Laboratory (AFRL) submission to the offline spoken language translation (SLT) task as part of the IWSLT 2020 evaluation campaign. As in previous years, we chose to adopt the cascade approach of using separate systems to perform speech activity detection, automatic speech recognition, sentence segmentation, and machine translation. All systems were neural based, including a fully-connected neural network for speech activity detection, a Kaldi factorized time delay neural network with recurrent neural network (RNN) language model rescoring for speech recognition, a bidirectional RNN with attention mechanism for sentence segmentation, and transformer networks trained with OpenNMT and Marian for machine translation. Our primary submission yielded BLEU scores of 21.28 on tst2019 and 23.33 on tst2020.
2018
The AFRL IWSLT 2018 Systems: What Worked, What Didn’t
Brian Ore, Eric Hansen, Katherine Young, Grant Erdmann, Jeremy Gwinnup
Proceedings of the 15th International Conference on Spoken Language Translation
This report summarizes the Air Force Research Laboratory (AFRL) machine translation (MT) and automatic speech recognition (ASR) systems submitted to the spoken language translation (SLT) and low-resource MT tasks as part of the IWSLT18 evaluation campaign.
2016
The MITLL-AFRL IWSLT 2016 Systems
Michaeel Kazi, Elizabeth Salesky, Brian Thompson, Jonathan Taylor, Jeremy Gwinnup, Timothy Anderson, Grant Erdmann, Eric Hansen, Brian Ore, Katherine Young, Michael Hutt
Proceedings of the 13th International Conference on Spoken Language Translation
This report summarizes the MITLL-AFRL MT and ASR systems and the experiments run during the 2016 IWSLT evaluation campaign. Building on lessons learned from previous years’ results, we refine our ASR systems and examine the explosion of neural machine translation systems and techniques developed in the past year. We experiment with a variety of phrase-based, hierarchical and neural-network approaches in machine translation and utilize system combination to create a composite system with the best characteristics of all attempted MT approaches.
2015
The MITLL-AFRL IWSLT 2015 MT system
Michaeel Kazi, Brian Thompson, Elizabeth Salesky, Timothy Anderson, Grant Erdmann, Eric Hansen, Brian Ore, Katherine Young, Jeremy Gwinnup, Michael Hutt, Christina May
Proceedings of the 12th International Workshop on Spoken Language Translation: Evaluation Campaign
2012
The MIT-LL/AFRL IWSLT 2012 MT system
Jennifer Drexler, Wade Shen, Tim Anderson, Raymond Slyh, Brian Ore, Eric Hansen, Terry Gleason
Proceedings of the 9th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2012 evaluation campaign. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Arabic to English and English to French TED-talk translation task. We also applied our existing ASR system to the TED-talk lecture ASR task, and combined our ASR and MT systems for the TED-talk SLT task. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2011 system, and experiments we ran during the IWSLT-2012 evaluation. Specifically, we focus on 1) cross-domain translation using MAP adaptation, 2) cross-entropy filtering of MT training data, and 3) improved Arabic morphology for MT preprocessing.
2011
The MIT-LL/AFRL IWSLT-2011 MT system
A. Ryan Aminzadeh, Tim Anderson, Ray Slyh, Brian Ore, Eric Hansen, Wade Shen, Jennifer Drexler, Terry Gleason
Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2011 evaluation campaign. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Arabic to English and English to French TED-talk translation tasks. We also applied our existing ASR system to the TED-talk lecture ASR task. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2010 system, and experiments we ran during the IWSLT-2011 evaluation. Specifically, we focus on 1) speech recognition for lecture-like data, 2) cross-domain translation using MAP adaptation, and 3) improved Arabic morphology for MT preprocessing.