Grant Erdmann


2024

Assessing the Role of Imagery in Multimodal Machine Translation
Nicholas Kashani Motlagh | Jim Davis | Jeremy Gwinnup | Grant Erdmann | Tim Anderson
Proceedings of the Ninth Conference on Machine Translation

In Multimodal Machine Translation (MMT), the use of visual data has shown only marginal improvements compared to text-only models. Previously, the CoMMuTE dataset and associated metric were proposed to score models on tasks where the imagery is necessary to disambiguate between two possible translations for each ambiguous source sentence. In this work, we introduce new metrics within the CoMMuTE domain to provide deeper insights into image-aware translation models. Our proposed metrics differ from the previous CoMMuTE scoring method by 1) assessing the impact of multiple images on individual translations and 2) evaluating a model’s ability to jointly select each translation for each image context. Our results challenge the conventional view that MMT models comprehend visual input poorly, showing that models can meaningfully interpret visual information, though they may not leverage it sufficiently in the final translation decision.
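
The joint-selection idea lends itself to a compact illustration. Below is a minimal sketch (not the authors' code) of a CoMMuTE-style joint disambiguation accuracy, assuming a hypothetical score(src, image, translation) interface that returns a model log-probability; each example pairs one ambiguous source sentence with two images and their two matching translations.

```python
def joint_disambiguation_accuracy(examples, score):
    """Fraction of examples where the model jointly prefers the matching
    translation under *both* image contexts (metric 2 in spirit).

    examples: list of dicts with keys
        "src"     -- ambiguous source sentence
        "images"  -- (image_a, image_b)
        "targets" -- (translation_a, translation_b), aligned with images
    score: hypothetical interface score(src, image, translation) -> log-prob
    """
    correct = 0
    for ex in examples:
        img_a, img_b = ex["images"]
        tgt_a, tgt_b = ex["targets"]
        # With image A the model must prefer translation A, and vice versa.
        prefers_a = score(ex["src"], img_a, tgt_a) > score(ex["src"], img_a, tgt_b)
        prefers_b = score(ex["src"], img_b, tgt_b) > score(ex["src"], img_b, tgt_a)
        correct += prefers_a and prefers_b
    return correct / len(examples)

def image_impact(ex, score):
    """How much swapping the image context moves one translation's score
    (metric 1 in spirit): positive means the matching image helps."""
    img_a, img_b = ex["images"]
    tgt_a, _ = ex["targets"]
    return score(ex["src"], img_a, tgt_a) - score(ex["src"], img_b, tgt_a)
```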

2021

Tune in: The AFRL WMT21 News-Translation Systems
Grant Erdmann | Jeremy Gwinnup | Tim Anderson
Proceedings of the Sixth Conference on Machine Translation

This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT21 evaluation campaign. This year, we explore various methods of adapting our baseline models from WMT20 and again measure improvements in performance on the Russian–English language pair.

2019

The AFRL WMT19 Systems: Old Favorites and New Tricks
Jeremy Gwinnup | Grant Erdmann | Tim Anderson
Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)

This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT19 evaluation campaign. This year, we refine our approach to training popular neural machine translation toolkits, experiment with a new domain adaptation technique and again measure improvements in performance on the Russian–English language pair.

Quality and Coverage: The AFRL Submission to the WMT19 Parallel Corpus Filtering for Low-Resource Conditions Task
Grant Erdmann | Jeremy Gwinnup
Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)

The WMT19 Parallel Corpus Filtering for Low-Resource Conditions Task aims to test various methods of filtering noisy parallel corpora to make them useful for training machine translation systems. This year the noisy corpora are the relatively low-resource language pairs of Nepali-English and Sinhala-English. This paper describes the Air Force Research Laboratory (AFRL) submissions, including preprocessing methods and scoring metrics. Numerical results indicate a benefit over the baseline and the relative benefits of different options.
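
The submission's actual preprocessing and scoring metrics are detailed in the paper itself; as a rough, hedged illustration of the general shape of heuristic corpus filtering, here is a toy sketch (all names and thresholds are my own, not AFRL's) that scores sentence pairs with common noise heuristics and keeps those above a threshold.

```python
def filter_score(src, tgt, max_ratio=2.0, min_len=1, max_len=200):
    """Toy quality score for one sentence pair: 0 rejects, higher is better.

    Combines heuristics commonly used in corpus filtering: length bounds,
    source/target length ratio, and a copy check (identical source and
    target usually indicate noise). Not the AFRL metrics.
    """
    ns, nt = len(src.split()), len(tgt.split())
    if not (min_len <= ns <= max_len and min_len <= nt <= max_len):
        return 0.0
    ratio = max(ns, nt) / min(ns, nt)
    if ratio > max_ratio:
        return 0.0
    if src.strip() == tgt.strip():
        return 0.0
    # Prefer pairs with balanced lengths; 1.0 = perfectly balanced.
    return 1.0 / ratio

def filter_corpus(pairs, threshold=0.5):
    """Keep pairs scoring above threshold, as a toy filtering pass."""
    return [(s, t) for s, t in pairs if filter_score(s, t) > threshold]
```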

2018

The AFRL IWSLT 2018 Systems: What Worked, What Didn’t
Brian Ore | Eric Hansen | Katherine Young | Grant Erdmann | Jeremy Gwinnup
Proceedings of the 15th International Conference on Spoken Language Translation

This report summarizes the Air Force Research Laboratory (AFRL) machine translation (MT) and automatic speech recognition (ASR) systems submitted to the spoken language translation (SLT) and low-resource MT tasks as part of the IWSLT18 evaluation campaign.

The AFRL WMT18 Systems: Ensembling, Continuation and Combination
Jeremy Gwinnup | Tim Anderson | Grant Erdmann | Katherine Young
Proceedings of the Third Conference on Machine Translation: Shared Task Papers

This paper describes the Air Force Research Laboratory (AFRL) machine translation systems and the improvements that were developed during the WMT18 evaluation campaign. This year, we examine developments and additions to popular neural machine translation toolkits and measure improvements in performance on the Russian–English language pair.

The AFRL-Ohio State WMT18 Multimodal System: Combining Visual with Traditional
Jeremy Gwinnup | Joshua Sandvick | Michael Hutt | Grant Erdmann | John Duselis | James Davis
Proceedings of the Third Conference on Machine Translation: Shared Task Papers

AFRL-Ohio State extends its use of visual domain-driven machine translation, employing it as a peer to traditional machine translation systems. As a peer, it is incorporated into a system combination of neural and statistical MT systems to produce a composite translation.

Coverage and Cynicism: The AFRL Submission to the WMT 2018 Parallel Corpus Filtering Task
Grant Erdmann | Jeremy Gwinnup
Proceedings of the Third Conference on Machine Translation: Shared Task Papers

The WMT 2018 Parallel Corpus Filtering Task aims to test various methods of filtering a noisy parallel corpus to make it useful for training machine translation systems. We describe the AFRL submissions, including their preprocessing methods and quality metrics. Numerical results indicate relative benefits of different options and show where our methods are competitive.

2017

The AFRL-MITLL WMT17 Systems: Old, New, Borrowed, BLEU
Jeremy Gwinnup | Timothy Anderson | Grant Erdmann | Katherine Young | Michaeel Kazi | Elizabeth Salesky | Brian Thompson | Jonathan Taylor
Proceedings of the Second Conference on Machine Translation

The AFRL WMT17 Neural Machine Translation Training Task Submission
Grant Erdmann | Katherine Young | Jeremy Gwinnup
Proceedings of the Second Conference on Machine Translation

2016

The AFRL-MITLL WMT16 News-Translation Task Systems
Jeremy Gwinnup | Tim Anderson | Grant Erdmann | Katherine Young | Michaeel Kazi | Elizabeth Salesky | Brian Thompson
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers

The MITLL-AFRL IWSLT 2016 Systems
Michaeel Kazi | Elizabeth Salesky | Brian Thompson | Jonathan Taylor | Jeremy Gwinnup | Timothy Anderson | Grant Erdmann | Eric Hansen | Brian Ore | Katherine Young | Michael Hutt
Proceedings of the 13th International Conference on Spoken Language Translation

This report summarizes the MITLL-AFRL MT and ASR systems and the experiments run during the 2016 IWSLT evaluation campaign. Building on lessons learned from previous years’ results, we refine our ASR systems and examine the explosion of neural machine translation systems and techniques developed in the past year. We experiment with a variety of phrase-based, hierarchical and neural-network approaches in machine translation and utilize system combination to create a composite system with the best characteristics of all attempted MT approaches.
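
The combination method itself is not spelled out in this summary; one simple, hedged illustration of producing a composite output from several systems is consensus (minimum-Bayes-risk-style) selection, sketched below with a toy n-gram overlap as the similarity measure. Real MT system combination (e.g., confusion-network decoding) is considerably more involved.

```python
from collections import Counter

def ngram_overlap(a, b, n=2):
    """Symmetric n-gram overlap in [0, 1] between two token lists."""
    def grams(toks):
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    ga, gb = grams(a), grams(b)
    if not ga or not gb:
        return float(a == b)
    shared = sum((ga & gb).values())
    return 2.0 * shared / (sum(ga.values()) + sum(gb.values()))

def consensus_select(hypotheses):
    """Return the hypothesis most similar on average to all the others,
    a minimum-Bayes-risk-style stand-in for true system combination."""
    if len(hypotheses) == 1:
        return hypotheses[0]
    toks = [h.split() for h in hypotheses]
    def avg_sim(i):
        sims = [ngram_overlap(toks[i], t) for j, t in enumerate(toks) if j != i]
        return sum(sims) / len(sims)
    return hypotheses[max(range(len(hypotheses)), key=avg_sim)]
```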

2015

The AFRL-MITLL WMT15 System: There’s More than One Way to Decode It!
Jeremy Gwinnup | Tim Anderson | Grant Erdmann | Katherine Young | Christina May | Michaeel Kazi | Elizabeth Salesky | Brian Thompson
Proceedings of the Tenth Workshop on Statistical Machine Translation

Drem: The AFRL Submission to the WMT15 Tuning Task
Grant Erdmann | Jeremy Gwinnup
Proceedings of the Tenth Workshop on Statistical Machine Translation

The MITLL-AFRL IWSLT 2015 MT system
Michaeel Kazi | Brian Thompson | Elizabeth Salesky | Timothy Anderson | Grant Erdmann | Eric Hansen | Brian Ore | Katherine Young | Jeremy Gwinnup | Michael Hutt | Christina May
Proceedings of the 12th International Workshop on Spoken Language Translation: Evaluation Campaign

2014

The MITLL-AFRL IWSLT 2014 MT system
Michaeel Kazi | Elizabeth Salesky | Brian Thompson | Jessica Ray | Michael Coury | Tim Anderson | Grant Erdmann | Jeremy Gwinnup | Katherine Young | Brian Ore | Michael Hutt
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign

This report summarizes the MITLL-AFRL MT and ASR systems and the experiments run using them during the 2014 IWSLT evaluation campaign. Our MT system is much improved over last year, owing to integration of techniques such as PRO and DREM optimization, factored language models, neural network joint model rescoring, multiple phrase tables, and development set creation. We focused our efforts this year on the tasks of translating from Arabic, Russian, Chinese, and Farsi into English, as well as translating from English to French. ASR performance also improved, partly due to increased efforts with deep neural networks for hybrid and tandem systems. Work focused on both the English and Italian ASR tasks.

2013

The MIT-LL/AFRL IWSLT-2013 MT system
Michaeel Kazi | Michael Coury | Elizabeth Salesky | Jessica Ray | Wade Shen | Terry Gleason | Tim Anderson | Grant Erdmann | Lane Schwartz | Brian Ore | Raymond Slyh | Jeremy Gwinnup | Katherine Young | Michael Hutt
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2013 evaluation campaign [1]. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Russian to English, Chinese to English, Arabic to English, and English to French TED-talk translation tasks. We also applied our existing ASR system to the TED-talk lecture ASR task. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2012 system, and experiments we ran during the IWSLT-2013 evaluation. Specifically, we focus on 1) cross-entropy filtering of MT training data, 2) improved optimization techniques, 3) language modeling, and 4) approximation of out-of-vocabulary words.
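
"Cross-entropy filtering" here is presumably in the spirit of Moore-Lewis (2010) cross-entropy difference selection: score each candidate sentence by its cross-entropy under an in-domain language model minus its cross-entropy under a general one, and keep the lowest-scoring sentences. Below is a toy, self-contained sketch using smoothed unigram LMs; real systems use far stronger language models, and this is not the paper's implementation.

```python
import math
from collections import Counter

def unigram_lm(corpus, alpha=0.5):
    """Train an add-alpha-smoothed unigram LM on a list of sentences;
    returns a per-token cross-entropy function (bits per token)."""
    counts = Counter(tok for sent in corpus for tok in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen tokens
    def cross_entropy(sent):
        toks = sent.split()
        if not toks:
            return float("inf")
        logp = sum(math.log2((counts[t] + alpha) / (total + alpha * vocab))
                   for t in toks)
        return -logp / len(toks)
    return cross_entropy

def moore_lewis_scores(candidates, in_domain, general):
    """Cross-entropy difference per candidate: H_in(s) - H_gen(s).
    Lower means more in-domain; keep the lowest-scoring sentences."""
    h_in = unigram_lm(in_domain)
    h_gen = unigram_lm(general)
    return [h_in(s) - h_gen(s) for s in candidates]
```

Sentences with the most negative score look most like the in-domain data relative to the general corpus, which is what makes the difference a useful filtering signal.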