2017
The AFRL-MITLL WMT17 Systems: Old, New, Borrowed, BLEU
Jeremy Gwinnup | Timothy Anderson | Grant Erdmann | Katherine Young | Michaeel Kazi | Elizabeth Salesky | Brian Thompson | Jonathan Taylor
Proceedings of the Second Conference on Machine Translation
Implicitly-Defined Neural Networks for Sequence Labeling
Michaeel Kazi | Brian Thompson
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
In this work, we propose a novel, implicitly-defined neural network architecture and describe a method to compute its components. The proposed architecture forgoes the causality assumption used to formulate recurrent neural networks and instead couples the hidden states of the network, allowing improvement on problems with complex, long-distance dependencies. Initial experiments demonstrate the new architecture outperforms both the Stanford Parser and baseline bidirectional networks on the Penn Treebank Part-of-Speech tagging task and a baseline bidirectional network on an additional artificial random biased walk task.
2016
The AFRL-MITLL WMT16 News-Translation Task Systems
Jeremy Gwinnup | Tim Anderson | Grant Erdmann | Katherine Young | Michaeel Kazi | Elizabeth Salesky | Brian Thompson
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers
The MITLL-AFRL IWSLT 2016 Systems
Michaeel Kazi | Elizabeth Salesky | Brian Thompson | Jonathan Taylor | Jeremy Gwinnup | Timothy Anderson | Grant Erdmann | Eric Hansen | Brian Ore | Katherine Young | Michael Hutt
Proceedings of the 13th International Conference on Spoken Language Translation
This report summarizes the MITLL-AFRL MT and ASR systems and the experiments run during the 2016 IWSLT evaluation campaign. Building on lessons learned from previous years’ results, we refine our ASR systems and examine the explosion of neural machine translation systems and techniques developed in the past year. We experiment with a variety of phrase-based, hierarchical and neural-network approaches in machine translation and utilize system combination to create a composite system with the best characteristics of all attempted MT approaches.
2015
The AFRL-MITLL WMT15 System: There’s More than One Way to Decode It!
Jeremy Gwinnup | Tim Anderson | Grant Erdmann | Katherine Young | Christina May | Michaeel Kazi | Elizabeth Salesky | Brian Thompson
Proceedings of the Tenth Workshop on Statistical Machine Translation
The MITLL-AFRL IWSLT 2015 MT system
Michaeel Kazi | Brian Thompson | Elizabeth Salesky | Timothy Anderson | Grant Erdmann | Eric Hansen | Brian Ore | Katherine Young | Jeremy Gwinnup | Michael Hutt | Christina May
Proceedings of the 12th International Workshop on Spoken Language Translation: Evaluation Campaign
2014
The MITLL-AFRL IWSLT 2014 MT system
Michaeel Kazi | Elizabeth Salesky | Brian Thompson | Jessica Ray | Michael Coury | Tim Anderson | Grant Erdmann | Jeremy Gwinnup | Katherine Young | Brian Ore | Michael Hutt
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign
This report summarizes the MITLL-AFRL MT and ASR systems and the experiments run using them during the 2014 IWSLT evaluation campaign. Our MT system is much improved over last year, owing to integration of techniques such as PRO and DREM optimization, factored language models, neural network joint model rescoring, multiple phrase tables, and development set creation. We focused our efforts this year on the tasks of translating from Arabic, Russian, Chinese, and Farsi into English, as well as translating from English to French. ASR performance also improved, partly due to increased efforts with deep neural networks for hybrid and tandem systems. Work focused on both the English and Italian ASR tasks.
2013
The MIT-LL/AFRL IWSLT-2013 MT system
Michaeel Kazi | Michael Coury | Elizabeth Salesky | Jessica Ray | Wade Shen | Terry Gleason | Tim Anderson | Grant Erdmann | Lane Schwartz | Brian Ore | Raymond Slyh | Jeremy Gwinnup | Katherine Young | Michael Hutt
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2013 evaluation campaign [1]. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Russian to English, Chinese to English, Arabic to English, and English to French TED-talk translation tasks. We also applied our existing ASR system to the TED-talk lecture ASR task. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2012 system, and experiments we ran during the IWSLT-2013 evaluation. Specifically, we focus on 1) cross-entropy filtering of MT training data, 2) improved optimization techniques, 3) language modeling, and 4) approximation of out-of-vocabulary words.