Sanjeev Khudanpur

Also published as: S. Khudanpur


2023

JHU IWSLT 2023 Dialect Speech Translation System Description
Amir Hussein | Cihan Xiao | Neha Verma | Thomas Thebaud | Matthew Wiesner | Sanjeev Khudanpur
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)

This paper presents JHU’s submissions to the IWSLT 2023 dialectal and low-resource track of Tunisian Arabic to English speech translation. The Tunisian dialect lacks formal orthography and abundant training data, making it challenging to develop effective speech translation (ST) systems. To address these challenges, we explore the integration of large pre-trained machine translation (MT) models, such as mBART and NLLB-200, in both end-to-end (E2E) and cascaded ST systems. We also improve the performance of automatic speech recognition (ASR) through the use of pseudo-labeling data augmentation and channel matching on telephone data. Finally, we combine our E2E and cascaded ST systems with Minimum Bayes-Risk decoding. Our combined system achieves BLEU scores of 21.6 and 19.1 on test2 and test3, respectively.
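As a rough illustration of the Minimum Bayes-Risk combination step mentioned above, the sketch below pools candidate translations from an E2E and a cascaded system and keeps the candidate with the highest expected similarity to the pool. The uniform candidate weights, the unigram-F1 utility, and the toy n-best lists are assumptions for illustration only, not the paper’s actual setup.

```python
# Hypothetical sketch of Minimum Bayes-Risk (MBR) combination: pool candidate
# translations from the E2E and cascaded systems and select the candidate with
# the highest expected utility (similarity) against the whole pool.
# Assumptions: uniform weight per candidate and a unigram-F1 utility.
from collections import Counter


def unigram_f1(hyp: str, ref: str) -> float:
    """Token-level F1 between two strings; a stand-in utility function."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)


def mbr_combine(candidates: list[str]) -> str:
    """Return the candidate maximizing its average utility over the pool
    (self-similarity adds a near-constant term and is left in for simplicity)."""
    return max(candidates, key=lambda c: sum(unigram_f1(c, o) for o in candidates))


# Pool n-best outputs from both systems for a single utterance.
e2e_nbest = ["the meeting starts at nine", "the meeting start at nine"]
cascade_nbest = ["meeting starts at nine", "the meeting starts at nine am"]
print(mbr_combine(e2e_nbest + cascade_nbest))
```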

JHU IWSLT 2023 Multilingual Speech Translation System Description
Henry Li Xinyuan | Neha Verma | Bismarck Bamfo Odoom | Ujvala Pradeep | Matthew Wiesner | Sanjeev Khudanpur
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)

We describe the Johns Hopkins ACL 60-60 Speech Translation systems submitted to the IWSLT 2023 Multilingual track, where we were tasked with translating ACL presentations from English into 10 languages. We developed cascaded speech translation systems for both the constrained and unconstrained subtracks. Our systems make use of pre-trained models as well as domain-specific corpora for this highly technical evaluation-only task. We find that the specific technical domain into which ACL presentations fall presents a unique challenge for both ASR and MT, and we present an error analysis and an ACL-specific corpus we produced to enable further work in this area.

2022

JHU IWSLT 2022 Dialect Speech Translation System Description
Jinyi Yang | Amir Hussein | Matthew Wiesner | Sanjeev Khudanpur
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)

This paper details the Johns Hopkins speech translation (ST) system used in the IWSLT 2022 dialect speech translation task. Our system uses a cascade of automatic speech recognition (ASR) and machine translation (MT). We use a Conformer model for ASR and a Transformer model for MT. Surprisingly, we found that while using additional ASR training data resulted in only a negligible change in performance as measured by BLEU or word error rate (WER), aggressive text normalization improved BLEU more significantly. We also describe an approach, similar to back-translation, for improving performance using synthetic dialectal source text produced from source sentences in mismatched dialects.

2021

Learning Feature Weights using Reward Modeling for Denoising Parallel Corpora
Gaurav Kumar | Philipp Koehn | Sanjeev Khudanpur
Proceedings of the Sixth Conference on Machine Translation

Large web-crawled corpora represent an excellent resource for improving the performance of Neural Machine Translation (NMT) systems across several language pairs. However, since these corpora are typically extremely noisy, their use is fairly limited. Current approaches to this problem mainly focus on filtering using heuristics or single features such as language model scores or bilingual similarity. This work presents an alternative approach which learns weights for multiple sentence-level features. These feature weights, which are optimized directly for the task of improving translation performance, are used to score and filter sentences in the noisy corpora more effectively. We provide results of applying this technique to building NMT systems using the Paracrawl corpus for Estonian-English and show that it beats strong single-feature baselines and hand-designed combinations. Additionally, we analyze the sensitivity of this method to different types of noise and explore whether the learned weights generalize to other language pairs using the Maltese-English Paracrawl corpus.
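The sketch below illustrates the general idea of scoring noisy sentence pairs with a weighted combination of sentence-level features and keeping only the top-scoring fraction. The feature functions and the weight vector are placeholders; in the paper the weights are learned via reward modeling against translation performance, which is not reproduced here.

```python
# Hypothetical sketch of filtering a noisy parallel corpus with a weighted
# combination of sentence-level features. The toy features and weight vector
# are placeholders; the paper learns the weights from a reward signal
# (downstream translation quality), which is not reproduced here.
import numpy as np


def features(src: str, tgt: str) -> np.ndarray:
    """Toy sentence-level features (length ratio, character cleanliness);
    the paper's features, e.g. LM scores or bilingual similarity, would go here."""
    src_len, tgt_len = len(src.split()), len(tgt.split())
    length_ratio = min(src_len, tgt_len) / max(src_len, tgt_len, 1)
    src_clean = sum(ch.isalpha() or ch.isspace() for ch in src) / max(len(src), 1)
    tgt_clean = sum(ch.isalpha() or ch.isspace() for ch in tgt) / max(len(tgt), 1)
    return np.array([length_ratio, src_clean, tgt_clean])


def filter_corpus(pairs, weights, keep_fraction=0.5):
    """Score each pair with w . f(src, tgt) and keep the top-scoring fraction."""
    ranked = sorted(pairs, key=lambda p: float(weights @ features(*p)), reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]


noisy_pairs = [
    ("see on lause .", "this is a sentence ."),
    ("!!!! ####", "random noise $$$$"),
    ("tere hommikust", "good morning"),
]
learned_weights = np.array([1.0, 0.5, 0.5])  # assumed; would come from reward modeling
print(filter_corpus(noisy_pairs, learned_weights, keep_fraction=0.7))
```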

Learning Curricula for Multilingual Neural Machine Translation Training
Gaurav Kumar | Philipp Koehn | Sanjeev Khudanpur
Proceedings of Machine Translation Summit XVIII: Research Track

Low-resource Multilingual Neural Machine Translation (MNMT) is typically tasked with improving the translation performance on one or more language pairs with the aid of high-resource language pairs. In this paper, we propose two simple search-based curricula – orderings of the multilingual training data – which help improve translation performance in conjunction with existing techniques such as fine-tuning. Additionally, we attempt to learn a curriculum for MNMT from scratch, jointly with the training of the translation system, using contextual multi-armed bandits. We show on the FLORES low-resource translation dataset that these learned curricula can provide better starting points for fine-tuning and improve the overall performance of the translation system.

2016

New release of Mixer-6: Improved validity for phonetic study of speaker variation and identification
Eleanor Chodroff | Matthew Maciejewski | Jan Trmal | Sanjeev Khudanpur | John Godfrey
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

The Mixer series of speech corpora were collected over several years, principally to support annual NIST evaluations of speaker recognition (SR) technologies. These evaluations focused on conversational speech over a variety of channels and recording conditions. One of the series, Mixer-6, added a new condition, read speech, to support basic scientific research on speaker characteristics, as well as technology evaluation. With read speech it is possible to make relatively precise measurements of phonetic events and features, which can be correlated with the performance of speaker recognition algorithms, or directly used in phonetic analysis of speaker variability. The read speech, as originally recorded, was adequate for large-scale evaluations (e.g., fixed-text speaker ID algorithms) but only marginally suitable for acoustic-phonetic studies. Numerous errors due largely to speaker behavior remained in the corpus, with no record of their locations or rate of occurrence. We undertook the effort to correct this situation with automatic methods supplemented by human listening and annotation. The present paper describes the tools and methods, resulting corrections, and some examples of the kinds of research studies enabled by these enhancements.

2015

A Coarse-Grained Model for Optimal Coupling of ASR and SMT Systems for Speech Translation
Gaurav Kumar | Graeme Blackwood | Jan Trmal | Daniel Povey | Sanjeev Khudanpur
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

Translations of the Callhome Egyptian Arabic corpus for conversational speech translation
Gaurav Kumar | Yuan Cao | Ryan Cotterell | Chris Callison-Burch | Daniel Povey | Sanjeev Khudanpur
Proceedings of the 11th International Workshop on Spoken Language Translation: Papers

Translation of the output of automatic speech recognition (ASR) systems, also known as speech translation, has received a lot of research interest recently. This is especially true for programs such as DARPA BOLT which focus on improving spontaneous human-human conversation across languages. However, this research is hindered by the dearth of datasets developed for this explicit purpose. For Egyptian Arabic-English, in particular, no parallel speech-transcription-translation dataset exists in the same domain. In order to support research in speech translation, we introduce the Callhome Egyptian Arabic-English Speech Translation Corpus. This supplements the existing LDC corpus with four reference translations for each utterance in the transcripts. The result is a three-way parallel dataset of Egyptian Arabic speech, transcriptions, and English translations.

Online Learning in Tensor Space
Yuan Cao | Sanjeev Khudanpur
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Can You Repeat That? Using Word Repetition to Improve Spoken Term Detection
Jonathan Wintrode | Sanjeev Khudanpur
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2013

Improved speech-to-text translation with the Fisher and Callhome Spanish-English speech translation corpus
Matt Post | Gaurav Kumar | Adam Lopez | Damianos Karakos | Chris Callison-Burch | Sanjeev Khudanpur
Proceedings of the 10th International Workshop on Spoken Language Translation: Papers

Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For Spanish-English translation, in particular, most available parallel data exists only in vastly different domains and registers. In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon’s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (informal, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.

2012

Sample Selection for Large-scale MT Discriminative Training
Yuan Cao | Sanjeev Khudanpur
Proceedings of the 10th Conference of the Association for Machine Translation in the Americas: Research Papers

Discriminative training for MT usually involves numerous features and requires a large-scale training set to reach reliable parameter estimation. Other than using expensive human-labeled parallel corpora for training, semi-supervised methods have been proposed to generate huge amounts of “hallucinated” data, which relieves the data sparsity problem. However, such large training sets contain both good samples, which are suitable for training, and bad ones that are harmful to it. How training samples are selected from this vast amount of data can greatly affect training performance. In this paper we propose a method for selecting the samples that are most suitable for discriminative training according to a criterion measuring dataset quality. Our experimental results show that by adding samples to the training set selectively, we are able to exceed the performance of a system trained with the same number of samples selected randomly.

Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT
Bhuvana Ramabhadran | Sanjeev Khudanpur | Ebru Arisoy
Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT

Revisiting the Case for Explicit Syntactic Information in Language Models
Ariya Rastrow | Sanjeev Khudanpur | Mark Dredze
Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT

Fast Syntactic Analysis for Statistical Language Modeling via Substructure Sharing and Uptraining
Ariya Rastrow | Mark Dredze | Sanjeev Khudanpur
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2011

Minimum Imputed-Risk: Unsupervised Discriminative Training for Machine Translation
Zhifei Li | Ziyuan Wang | Jason Eisner | Sanjeev Khudanpur | Brian Roark
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

Efficient Subsampling for Training Complex Language Models
Puyang Xu | Asela Gunawardana | Sanjeev Khudanpur
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

Joshua 2.0: A Toolkit for Parsing-Based Machine Translation with Syntax, Semirings, Discriminative Training and Other Goodies
Zhifei Li | Chris Callison-Burch | Chris Dyer | Juri Ganitkevitch | Ann Irvine | Sanjeev Khudanpur | Lane Schwartz | Wren Thornton | Ziyuan Wang | Jonathan Weese | Omar Zaidan
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

A Comparative Study of Word Co-occurrence for Term Clustering in Language Model-based Sentence Retrieval
Saeedeh Momtazi | Sanjeev Khudanpur | Dietrich Klakow
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Unsupervised Discriminative Language Model Training for Machine Translation using Simulated Confusion Sets
Zhifei Li | Ziyuan Wang | Sanjeev Khudanpur | Jason Eisner
Coling 2010: Posters

2009

Joshua: An Open Source Toolkit for Parsing-Based Machine Translation
Zhifei Li | Chris Callison-Burch | Chris Dyer | Sanjeev Khudanpur | Lane Schwartz | Wren Thornton | Jonathan Weese | Omar Zaidan
Proceedings of the Fourth Workshop on Statistical Machine Translation

Efficient Extraction of Oracle-best Translations from Hypergraphs
Zhifei Li | Sanjeev Khudanpur
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

Variational Decoding for Statistical Machine Translation
Zhifei Li | Jason Eisner | Sanjeev Khudanpur
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

Demonstration of Joshua: An Open Source Toolkit for Parsing-based Machine Translation
Zhifei Li | Chris Callison-Burch | Chris Dyer | Juri Ganitkevitch | Sanjeev Khudanpur | Lane Schwartz | Wren N. G. Thornton | Jonathan Weese | Omar F. Zaidan
Proceedings of the ACL-IJCNLP 2009 Software Demonstrations

2008

Large-scale Discriminative n-gram Language Models for Statistical Machine Translation
Zhifei Li | Sanjeev Khudanpur
Proceedings of the 8th Conference of the Association for Machine Translation in the Americas: Research Papers

We extend discriminative n-gram language modeling techniques originally proposed for automatic speech recognition to a statistical machine translation task. In this context, we propose a novel data selection method that leads to good models using a fraction of the training data. We carry out systematic experiments on several benchmark tests for Chinese to English translation using a hierarchical phrase-based machine translation system, and show that a discriminative language model significantly improves upon a state-of-the-art baseline. The experiments also highlight the benefits of our data selection method.
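For illustration, the sketch below trains a perceptron-style discriminative n-gram language model that reranks k-best translation lists, in the spirit of the ASR technique the paper adapts to SMT. The n-gram feature set, the toy k-best list, and the oracle hypothesis are simplified assumptions, not the paper’s configuration or data-selection method.

```python
# Hypothetical sketch of a perceptron-trained discriminative n-gram language
# model used to rerank k-best translations. All data and settings below are
# assumptions for illustration only.
from collections import Counter


def ngram_feats(sentence: str, max_order: int = 2) -> Counter:
    """Counts of all 1..max_order n-grams in a hypothesis (the model's features)."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    feats = Counter()
    for order in range(1, max_order + 1):
        for i in range(len(toks) - order + 1):
            feats[tuple(toks[i:i + order])] += 1
    return feats


def score(weights: Counter, hyp: str) -> float:
    return sum(weights[f] * count for f, count in ngram_feats(hyp).items())


def perceptron_train(kbest_lists, oracles, epochs: int = 5) -> Counter:
    """Standard perceptron update: reward n-grams of the oracle (e.g. the
    highest-BLEU hypothesis) and penalize n-grams of the current 1-best."""
    weights = Counter()
    for _ in range(epochs):
        for kbest, oracle in zip(kbest_lists, oracles):
            current_best = max(kbest, key=lambda h: score(weights, h))
            if current_best != oracle:
                weights.update(ngram_feats(oracle))
                weights.subtract(ngram_feats(current_best))
    return weights


kbest_lists = [["he go to school", "he goes to school", "him goes school"]]
oracles = ["he goes to school"]  # assumed oracle-best under BLEU
w = perceptron_train(kbest_lists, oracles)
print(max(kbest_lists[0], key=lambda h: score(w, h)))
```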

Machine Translation System Combination using ITG-based Alignments
Damianos Karakos | Jason Eisner | Sanjeev Khudanpur | Markus Dreyer
Proceedings of ACL-08: HLT, Short Papers

Unsupervised Learning of Acoustic Sub-word Units
Balakrishnan Varadarajan | Sanjeev Khudanpur | Emmanuel Dupoux
Proceedings of ACL-08: HLT, Short Papers

A Scalable Decoder for Parsing-Based Machine Translation with Equivalent Language Model State Maintenance
Zhifei Li | Sanjeev Khudanpur
Proceedings of the ACL-08: HLT Second Workshop on Syntax and Structure in Statistical Translation (SSST-2)

2007

Comparing Reordering Constraints for SMT Using Efficient BLEU Oracle Computation
Markus Dreyer | Keith Hall | Sanjeev Khudanpur
Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation

Cross-Instance Tuning of Unsupervised Document Clustering Algorithms
Damianos Karakos | Jason Eisner | Sanjeev Khudanpur | Carey Priebe
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

2006

Generative Content Models for Structural Analysis of Medical Abstracts
Jimmy Lin | Damianos Karakos | Dina Demner-Fushman | Sanjeev Khudanpur
Proceedings of the HLT-NAACL BioNLP Workshop on Linking Natural Language and Biology

2004

A Smorgasbord of Features for Statistical Machine Translation
Franz Josef Och | Daniel Gildea | Sanjeev Khudanpur | Anoop Sarkar | Kenji Yamada | Alex Fraser | Shankar Kumar | Libin Shen | David Smith | Katherine Eng | Viren Jain | Zhen Jin | Dragomir Radev
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

2003

Cross-Lingual Lexical Triggers in Statistical Language Modeling
Woosung Kim | Sanjeev Khudanpur
Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing

Transliteration of Proper Names in Cross-Lingual Information Retrieval
Paola Virga | Sanjeev Khudanpur
Proceedings of the ACL 2003 Workshop on Multilingual and Mixed-language Named Entity Recognition

Latent Semantic Information in Maximum Entropy Language Models for Conversational Speech Recognition
Yonggang Deng | Sanjeev Khudanpur
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics

Desperately Seeking Cebuano
Douglas W. Oard | David Doermann | Bonnie Dorr | Daqing He | Philip Resnik | Amy Weinberg | William Byrne | Sanjeev Khudanpur | David Yarowsky | Anton Leuski | Philipp Koehn | Kevin Knight
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers

2001

Mandarin-English Information: Investigating Translingual Speech Retrieval
Helen Meng | Berlin Chen | Sanjeev Khudanpur | Gina-Anne Levow | Wai-Kit Lo | Douglas Oard | Patrick Shone | Karen Tang | Hsin-Min Wang | Jianqiang Wang
Proceedings of the First International Conference on Human Language Technology Research

Robust Knowledge Discovery from Parallel Speech and Text Sources
F. Jelinek | W. Byrne | S. Khudanpur | B. Hladká | H. Ney | F. J. Och | J. Cuřín | J. Psutka
Proceedings of the First International Conference on Human Language Technology Research

2000

Mandarin-English Information (MEI): Investigating Translingual Speech Retrieval
Helen Meng | Sanjeev Khudanpur | Gina Levow | Douglas W. Oard | Hsin-Min Wang
ANLP-NAACL 2000 Workshop: Embedded Machine Translation Systems