2020
FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN
Ebrahim Ansari | Amittai Axelrod | Nguyen Bach | Ondřej Bojar | Roldano Cattoni | Fahim Dalvi | Nadir Durrani | Marcello Federico | Christian Federmann | Jiatao Gu | Fei Huang | Kevin Knight | Xutai Ma | Ajay Nagesh | Matteo Negri | Jan Niehues | Juan Pino | Elizabeth Salesky | Xing Shi | Sebastian Stüker | Marco Turchi | Alexander Waibel | Changhan Wang
Proceedings of the 17th International Conference on Spoken Language Translation
The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured six challenge tracks this year: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation. A total of teams participated in at least one of the tracks. This paper introduces each track’s goal, data, and evaluation metrics, and reports the results of the received submissions.
DiDi Labs’ End-to-end System for the IWSLT 2020 Offline Speech Translation Task
Arkady Arkhangorodsky | Yiqi Huang | Amittai Axelrod
Proceedings of the 17th International Conference on Spoken Language Translation
This paper describes the system submitted by DiDi Labs to the offline speech translation task for IWSLT 2020. We trained an end-to-end system that translates audio from English TED talks to German text, without producing intermediate English text. We used the S-Transformer architecture and trained on the MuST-C dataset. We also describe several additional experiments that were attempted but did not yield improved results.
2019
Proceedings of the 2019 Workshop on Widening NLP
Amittai Axelrod | Diyi Yang | Rossana Cunha | Samira Shaikh | Zeerak Waseem
Proceedings of the 2019 Workshop on Widening NLP
Dual Monolingual Cross-Entropy Delta Filtering of Noisy Parallel Data
Amittai Axelrod | Anish Kumar | Steve Sloto
Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
We introduce a purely monolingual approach to filtering parallel data from a noisy corpus in a low-resource scenario. Our work is inspired by Junczys-Dowmunt (2018), but we relax the requirements to allow for cases where no parallel data is available. Our primary contribution is a dual monolingual cross-entropy delta criterion, modified from Cynical data selection (Axelrod, 2017), that is competitive (within 1.8 BLEU) with the best bilingual filtering method when used to train SMT systems. Our approach is featherweight, and runs end-to-end on a standard laptop in three hours.
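As an illustration of the criterion described in the abstract, here is a minimal sketch, assuming Laplace-smoothed unigram language models as stand-ins for the paper's LMs; the function names and the exact pair score are hypothetical, not the paper's implementation. A sentence's cross-entropy delta is low when it looks in-domain, and a sentence pair is plausible when the deltas computed independently on the two monolingual sides agree.

```python
import math
from collections import Counter

def unigram_lm(corpus):
    # Laplace-smoothed unigram LM; a stand-in for the stronger LMs in the paper
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 for the unseen-word event
    return lambda w: (counts.get(w, 0) + 1) / (total + vocab)

def cross_entropy(lm, sent):
    # Per-word cross-entropy of a sentence under a unigram LM (bits/word)
    words = sent.split()
    return -sum(math.log2(lm(w)) for w in words) / max(len(words), 1)

def delta(in_lm, gen_lm, sent):
    # Cross-entropy delta: negative when a sentence looks in-domain
    return cross_entropy(in_lm, sent) - cross_entropy(gen_lm, sent)

def pair_score(src_delta, tgt_delta):
    # Dual criterion (one plausible reading): lower is better, i.e. keep
    # pairs whose two monolingual deltas agree with each other
    return abs(src_delta - tgt_delta)
```

Each side of the noisy corpus is scored with only its own language's models, which is what makes the approach applicable when no clean parallel data exists.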
2017
Data Selection with Cluster-Based Language Difference Models and Cynical Selection
Lucía Santamaría | Amittai Axelrod
Proceedings of the 14th International Conference on Spoken Language Translation
We present and apply two methods for addressing the problem of selecting relevant training data out of a general pool for use in tasks such as machine translation. Building on existing work on class-based language difference models [1], we first introduce a cluster-based method that uses Brown clusters to condense the vocabulary of the corpora. Secondly, we implement the cynical data selection method [2], which incrementally constructs a training corpus to efficiently model the task corpus. Both the cluster-based and the cynical data selection approaches are used for the first time within a machine translation system, and we perform a head-to-head comparison. Our intrinsic evaluations show that both new methods outperform the standard Moore-Lewis approach (cross-entropy difference), in terms of better perplexity and OOV rates on in-domain data. The cynical approach converges much quicker, covering nearly all of the in-domain vocabulary with 84% less data than the other methods. Furthermore, the new approaches can be used to select machine translation training data for training better systems. Our results confirm that class-based selection using Brown clusters is a viable alternative to POS-based class-based methods, and removes the reliance on a part-of-speech tagger. Additionally, we are able to validate the recently proposed cynical data selection method, showing that its performance in SMT models surpasses that of traditional cross-entropy difference methods and more closely matches the sentence length of the task corpus.
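To make the incremental flavor of cynical selection concrete, the following is a highly simplified greedy sketch with hypothetical names: the actual method optimizes a relative-entropy objective, whereas this sketch substitutes a length-normalized vocabulary-coverage gain. Each step picks the pool sentence that best covers task-corpus vocabulary not yet represented in the selection.

```python
from collections import Counter

def cynical_sketch(pool, task, n):
    """Greedy sketch of incremental selection: repeatedly pick the pool
    sentence whose words best cover still-unseen task vocabulary,
    normalized by sentence length (a simplification of the
    relative-entropy objective of cynical selection)."""
    need = Counter(w for sent in task for w in sent.split())
    selected = []
    remaining = list(pool)
    for _ in range(min(n, len(remaining))):
        def gain(sent):
            words = sent.split()
            # Reward covering task words we still need, per word of cost
            return sum(need[w] for w in set(words)) / max(len(words), 1)
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
        for w in set(best.split()):
            need[w] = 0  # covered; stop rewarding this word
    return selected
```

The early, steep gains of such a greedy loop mirror the fast convergence reported above: most of the in-domain vocabulary is covered within the first selections.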
2015
Data Selection With Fewer Words
Amittai Axelrod | Philip Resnik | Xiaodong He | Mari Ostendorf
Proceedings of the Tenth Workshop on Statistical Machine Translation
The UMD machine translation systems at IWSLT 2015
Amittai Axelrod | Marine Carpuat
Proceedings of the 12th International Workshop on Spoken Language Translation: Evaluation Campaign
Class-based N-gram language difference models for data selection
Amittai Axelrod | Yogarshi Vyas | Marianna Martindale | Marine Carpuat
Proceedings of the 12th International Workshop on Spoken Language Translation: Papers
2012
Applications of data selection via cross-entropy difference for real-world statistical machine translation
Amittai Axelrod | QingJun Li | William D. Lewis
Proceedings of the 9th International Workshop on Spoken Language Translation: Papers
We broaden the application of data selection methods for domain adaptation to a larger number of languages, data, and decoders than shown in previous work, and explore comparable applications for both monolingual and bilingual cross-entropy difference methods. We compare domain-adapted systems against very large general-purpose systems for the same languages, and do so without a bias to a particular direction. We present results against real-world general-purpose systems tuned on domain-specific data, which are substantially harder to beat than standard research baseline systems. We show better performance for nearly all domain-adapted systems, despite the fact that the domain-adapted systems are trained on a fraction of the content of their general-domain counterparts. The high performance of these methods suggests applicability to a wide variety of contexts, particularly in scenarios where only small supplies of unambiguously domain-specific data are available, yet it is believed that additional similar data is included in larger heterogeneous-content general-domain corpora.
2011
The MSR system for IWSLT 2011 evaluation
Xiaodong He | Amittai Axelrod | Li Deng | Alex Acero | Mei-Yuh Hwang | Alisa Nguyen | Andrew Wang | Xiahui Huang
Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the Microsoft Research (MSR) system for the evaluation campaign of the 2011 International Workshop on Spoken Language Translation. The evaluation task is to translate TED talks (www.ted.com). This task presents two unique challenges: First, the underlying topic switches sharply from talk to talk. Therefore, the translation system needs to adapt to the current topic quickly and dynamically. Second, only a very small amount of relevant parallel data (transcripts of TED talks) is available. Therefore, it is necessary to perform accurate translation model estimation with limited data. In preparation for the evaluation, we developed two new methods to attack these problems. Specifically, we developed an unsupervised topic-modeling-based adaptation method for machine translation models. We also developed a discriminative training method to estimate parameters in the generative components of the translation models with limited data. Experimental results show that both methods improve the translation quality. Among all the submissions, ours achieves the best BLEU score in the Chinese-to-English machine translation track (MT_CE) of the IWSLT 2011 evaluation, in which we participated.
Domain Adaptation via Pseudo In-Domain Data Selection
Amittai Axelrod | Xiaodong He | Jianfeng Gao
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing
2010
The MSRA machine translation system for IWSLT 2010
Chi-Ho Li | Nan Duan | Yinggong Zhao | Shujie Liu | Lei Cui | Mei-yuh Hwang | Amittai Axelrod | Jianfeng Gao | Yaodong Zhang | Li Deng
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign
2009
The University of Washington machine translation system for IWSLT 2009
Mei Yang | Amittai Axelrod | Kevin Duh | Katrin Kirchhoff
Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the University of Washington’s system for the 2009 International Workshop on Spoken Language Translation (IWSLT) evaluation campaign. Two systems were developed, one each for the BTEC Chinese-to-English and Arabic-to-English tracks. We describe experiments with different preprocessing and alignment combination schemes. Our main focus this year was on exploring a novel semi-supervised approach to N-best list reranking; however, this method yielded inconclusive results.
2008
The University of Washington Machine Translation System for ACL WMT 2008
Amittai Axelrod | Mei Yang | Kevin Duh | Katrin Kirchhoff
Proceedings of the Third Workshop on Statistical Machine Translation
2005
Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation
Philipp Koehn | Amittai Axelrod | Alexandra Birch Mayne | Chris Callison-Burch | Miles Osborne | David Talbot
Proceedings of the Second International Workshop on Spoken Language Translation
2003
On building a high performance gazetteer database
Amittai Axelrod
Proceedings of the HLT-NAACL 2003 Workshop on Analysis of Geographic References