2024
Multimodal Cross-lingual Phrase Retrieval
Chuanqi Dong | Wenjie Zhou | Xiangyu Duan | Yuqi Zhang | Min Zhang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Cross-lingual phrase retrieval aims to retrieve parallel phrases across languages. Current approaches deal only with the textual modality; data resources and explorations for multimodal cross-lingual phrase retrieval (MXPR) are lacking. In this paper, we create the first MXPR data resource and propose a novel approach for MXPR to explore the effectiveness of multi-modality. The MXPR data resource is built by marrying the benchmark dataset for textual cross-lingual phrase retrieval with Wikimedia Commons, a media store containing a tremendous number of texts and related images. In the built resource, the phrase pairs of the textual benchmark dataset are equipped with their related images. Based on this novel data resource, we introduce a strategy to bridge the gap between modalities through multimodal relation generation with a large multimodal pre-trained model and consistency training. Experiments on the benchmark dataset covering eight language pairs show that our MXPR approach, which deals with multimodal phrases, performs significantly better than purely textual cross-lingual phrase retrieval.
2023
Easy Guided Decoding in Providing Suggestions for Interactive Machine Translation
Ke Wang | Xin Ge | Jiayi Wang | Yuqi Zhang | Yu Zhao
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Machine translation technology has made great progress in recent years, but it cannot guarantee error-free results. In computer-aided translation, human translators post-edit machine translations to correct errors. To expedite the post-editing process, many works have investigated machine translation in interactive modes, in which machines automatically refine the rest of a translation constrained by a human's edits. Translation Suggestion (TS), an interactive mode that assists human translators, requires machines to generate alternatives for specific incorrect words or phrases selected by the translator. In this paper, we utilize the parameterized objective function of neural machine translation (NMT) and propose a novel constrained decoding algorithm, namely Prefix-Suffix Guided Decoding (PSGD), to address the TS problem without additional training. Compared to the state-of-the-art lexically constrained decoding method, PSGD improves translation quality by an average of 10.6 BLEU and reduces time overhead by an average of 63.4% on benchmark datasets. Furthermore, on both the WeTS and the WMT 2022 Translation Suggestion datasets, it outperforms other supervised learning systems trained with TS-annotated data.
Disambiguated Lexically Constrained Neural Machine Translation
Jinpeng Zhang | Nini Xiao | Ke Wang | Chuanqi Dong | Xiangyu Duan | Yuqi Zhang | Min Zhang
Findings of the Association for Computational Linguistics: ACL 2023
Lexically constrained neural machine translation (LCNMT), which controls translation generation with pre-specified constraints, is important in many practical applications. Current approaches to LCNMT typically assume that the pre-specified lexical constraints are contextually appropriate. This assumption limits their application to real-world scenarios where a source lexicon may have multiple target constraints and disambiguation is needed to select the most suitable one. In this paper, we propose disambiguated LCNMT (D-LCNMT) to solve this problem. D-LCNMT is a robust and effective two-stage framework that first disambiguates the constraints based on context and then integrates the disambiguated constraints into LCNMT. Experimental results show that our approach outperforms strong baselines, including existing data augmentation based approaches, on benchmark datasets, and comprehensive experiments in scenarios where a source lexicon corresponds to multiple target constraints demonstrate the constraint-disambiguation superiority of our approach.
Improving Neural Machine Translation by Multi-Knowledge Integration with Prompting
Ke Wang | Jun Xie | Yuqi Zhang | Yu Zhao
Findings of the Association for Computational Linguistics: EMNLP 2023
Improving neural machine translation (NMT) systems with prompting has achieved significant progress in recent years. In this work, we focus on how to integrate multi-knowledge, i.e., multiple types of knowledge, into NMT models to enhance performance with prompting. We propose a unified framework that can effectively integrate multiple types of knowledge, including sentences, terminologies/phrases, and translation templates, into NMT models. We utilize these types of knowledge as prefix-prompts for the input of the encoder and decoder of NMT models to guide the translation process. The approach requires no changes to the model architecture and effectively adapts to domain-specific translation without retraining. Experiments on English-Chinese and English-German translation demonstrate that our approach significantly outperforms strong baselines, achieving high translation quality and terminology match accuracy.
2022
Third-Party Aligner for Neural Word Alignments
Jinpeng Zhang | Chuanqi Dong | Xiangyu Duan | Yuqi Zhang | Min Zhang
Findings of the Association for Computational Linguistics: EMNLP 2022
Word alignment aims to find translationally equivalent words between source and target sentences. Previous work has demonstrated that self-training can achieve competitive word alignment results. In this paper, we propose to use word alignments generated by a third-party word aligner to supervise neural word alignment training. Specifically, the source and target words of each word pair aligned by the third-party aligner are trained to be close neighbors in the contextualized embedding space when fine-tuning a pre-trained cross-lingual language model. Experiments on benchmarks for various language pairs show that our approach can, surprisingly, self-correct over the third-party supervision by finding more accurate word alignments and deleting wrong ones, leading to better performance than various third-party word aligners, including the current best one. When we integrate supervision from various third-party aligners, we achieve state-of-the-art word alignment performance, with alignment error rates on average more than two points lower than the best third-party aligner. We release our code at https://github.com/sdongchuanqi/Third-Party-Supervised-Aligner.
TSMind: Alibaba and Soochow University’s Submission to the WMT22 Translation Suggestion Task
Xin Ge | Ke Wang | Jiayi Wang | Nini Xiao | Xiangyu Duan | Yu Zhao | Yuqi Zhang
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the joint submission of Alibaba and Soochow University to the WMT 2022 Shared Task on Translation Suggestion (TS). We participate in the English to/from German and English to/from Chinese tasks. We adopt the paradigm of fine-tuning large-scale pre-trained models on downstream tasks, which has recently achieved great success. We choose FAIR’s WMT19 English to/from German news translation system and MBART50 for English to/from Chinese as our pre-trained models. Considering the task’s restriction on the use of training data, we follow the data augmentation strategies provided by Yang to boost our TS model’s performance, and we further use the dual conditional cross-entropy model and the GPT-2 language model to filter the augmented data. The final leaderboard shows that our submissions rank first in three of the four language directions of the Naive TS track of the WMT22 Translation Suggestion task.
2021
TermMind: Alibaba’s WMT21 Machine Translation Using Terminologies Task Submission
Ke Wang | Shuqin Gu | Boxing Chen | Yu Zhao | Weihua Luo | Yuqi Zhang
Proceedings of the Sixth Conference on Machine Translation
This paper describes our work in the WMT 2021 Machine Translation Using Terminologies Shared Task. We participate in the terminology translation task for the English-to-Chinese language pair. To satisfy terminology constraints in translation, we use a terminology data augmentation strategy based on the Transformer model. We use tags to mark term translations and add them to the matched sentences, and we create synthetic terms using phrase tables extracted from a bilingual corpus to increase the proportion of term translations in the training data. Detailed data pre-processing and filtering, in-domain fine-tuning, and an ensemble method are used in our system. Our submission obtains competitive results in the terminology-targeted evaluation.
QEMind: Alibaba’s Submission to the WMT21 Quality Estimation Shared Task
Jiayi Wang | Ke Wang | Boxing Chen | Yu Zhao | Weihua Luo | Yuqi Zhang
Proceedings of the Sixth Conference on Machine Translation
Quality Estimation, a crucial step of quality control for machine translation, has been explored for years. The goal is to investigate automatic methods for estimating the quality of machine translation results without reference translations. In this year’s WMT QE shared task, we utilize the large-scale XLM-RoBERTa pre-trained model and additionally propose several useful features that evaluate the uncertainty of the translations to build our QE system, named QEMind. The system has been applied to the sentence-level scoring task of Direct Assessment and the binary score prediction task of Critical Error Detection. In this paper, we present our submissions to the WMT 2021 QE shared task, and an extensive set of experimental results shows that our multilingual systems outperform the best system in the Direct Assessment QE task of WMT 2020.
Beyond Glass-Box Features: Uncertainty Quantification Enhanced Quality Estimation for Neural Machine Translation
Ke Wang | Yangbin Shi | Jiayi Wang | Yuqi Zhang | Yu Zhao | Xiaolin Zheng
Findings of the Association for Computational Linguistics: EMNLP 2021
Quality Estimation (QE) plays an essential role in applications of Machine Translation (MT). Traditionally, a QE system accepts as input the original source text and a translation from a black-box MT system. Recently, a few studies have indicated that, as a by-product of translation, QE benefits from information about the model and training data of the MT system the translations come from; this is called “glass-box QE”. In this paper, we extend the definition of “glass-box QE” to uncertainty quantification with both “black-box” and “glass-box” approaches, and we design several features deduced from them to blaze a new trail in improving QE performance. We propose a framework that fuses the feature engineering of uncertainty quantification into a pre-trained cross-lingual language model to predict translation quality. Experimental results show that our method achieves state-of-the-art performance on the datasets of the WMT 2020 QE shared task.
2020
Alibaba’s Submission for the WMT 2020 APE Shared Task: Improving Automatic Post-Editing with Pre-trained Conditional Cross-Lingual BERT
Jiayi Wang | Ke Wang | Kai Fan | Yuqi Zhang | Jun Lu | Xin Ge | Yangbin Shi | Yu Zhao
Proceedings of the Fifth Conference on Machine Translation
The goal of Automatic Post-Editing (APE) is to examine automatic methods for correcting translation errors generated by an unknown machine translation (MT) system. This paper describes Alibaba’s submissions to the WMT 2020 APE Shared Task for the English-German language pair. We design a two-stage training pipeline. First, a BERT-like cross-lingual language model is pre-trained by randomly masking target sentences alone. Then, an additional neural decoder on top of the pre-trained model is jointly fine-tuned for the APE task. We also apply an imitation learning strategy to augment a reasonable amount of pseudo APE training data, potentially preventing the model from overfitting on the limited real training data and boosting performance on held-out data. To verify our proposed model and data augmentation, we evaluate our approach on the well-known benchmark English-German dataset from the WMT 2017 APE task. The experimental results demonstrate that our system significantly outperforms all other baselines and achieves state-of-the-art performance. The final results on the WMT 2020 test dataset show that our submission achieves +5.56 BLEU and -4.57 TER with respect to the official MT baseline.
Alibaba Submission to the WMT20 Parallel Corpus Filtering Task
Jun Lu | Xin Ge | Yangbin Shi | Yuqi Zhang
Proceedings of the Fifth Conference on Machine Translation
This paper describes the Alibaba Machine Translation Group’s submissions to the WMT 2020 Shared Task on Parallel Corpus Filtering and Alignment. In the filtering task, three main methods are applied to evaluate the quality of the parallel corpus: a) a dual bilingual GPT-2 model, b) a dual conditional cross-entropy model, and c) the IBM word alignment model. The scores of these models are combined using a positive-unlabeled (PU) learning model and a brute-force search to obtain additional gains. In addition, a few simple but efficient rules are adopted to evaluate the quality and diversity of the corpus. In the alignment-filtering task, the extraction pipeline for bilingual sentence pairs includes the following steps: bilingual lexicon mining, language identification, sentence segmentation, and sentence alignment. The final results show that, in both the filtering and alignment tasks, our system significantly outperforms the LASER-based system.
2015
The Karlsruhe Institute of Technology Translation Systems for the WMT 2015
Eunah Cho | Thanh-Le Ha | Jan Niehues | Teresa Herrmann | Mohammed Mediani | Yuqi Zhang | Alex Waibel
Proceedings of the Tenth Workshop on Statistical Machine Translation
2014
The KIT translation systems for IWSLT 2014
Isabel Slawik | Mohammed Mediani | Jan Niehues | Yuqi Zhang | Eunah Cho | Teresa Herrmann | Thanh-Le Ha | Alex Waibel
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper, we present the KIT systems participating in the TED translation tasks of the IWSLT 2014 machine translation evaluation. We submitted phrase-based translation systems for all three official directions, namely English→German, German→English, and English→French, as well as for the optional directions English→Chinese and English→Arabic. For the official directions we built systems both for the machine translation as well as the spoken language translation track. This year we improved our systems’ performance over last year through n-best list rescoring using neural network-based translation and language models and novel preordering rules based on tree information of multiple syntactic levels. Furthermore, we could successfully apply a novel phrase extraction algorithm and transliteration of unknown words for Arabic. We also submitted a contrastive system for German→English built with stemmed German adjectives. For the SLT tracks, we used a monolingual translation system to translate the lowercased ASR hypotheses with all punctuation stripped to truecased, punctuated output as a preprocessing step to our usual translation system.
Rule-based preordering on multiple syntactic levels in statistical machine translation
Ge Wu | Yuqi Zhang | Alexander Waibel
Proceedings of the 11th International Workshop on Spoken Language Translation: Papers
We propose a novel data-driven rule-based preordering approach, which uses tree information from multiple syntactic levels. This approach extends tree-based reordering from one level to multiple levels, enabling it to handle more complicated reordering cases. We have conducted experiments in the English-to-Chinese and Chinese-to-English translation directions. Our results show that the approach improves translation quality both when applied separately and when combined with other reordering approaches. Used alone, our reordering approach improved the BLEU score by 1.61 in the English-to-Chinese direction and by 2.16 in the Chinese-to-English direction, compared with the baseline, which used no word reordering. Combined with the short-rule [1], long-rule [2], and tree-rule [3] based preordering approaches, it yielded further improvements of up to 0.43 BLEU in the English-to-Chinese direction and up to 0.3 BLEU in the Chinese-to-English direction. In translations produced with our preordering approach, we also found many examples with improved syntactic structure.
The Karlsruhe Institute of Technology Translation Systems for the WMT 2014
Teresa Herrmann | Mohammed Mediani | Eunah Cho | Thanh-Le Ha | Jan Niehues | Isabel Slawik | Yuqi Zhang | Alex Waibel
Proceedings of the Ninth Workshop on Statistical Machine Translation
2013
Measuring the Structural Importance through Rhetorical Structure Index
Narine Kokhlikyan | Alex Waibel | Yuqi Zhang | Joy Ying Zhang
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
The KIT translation systems for IWSLT 2013
Thanh-Le Ha | Teresa Herrmann | Jan Niehues | Mohammed Mediani | Eunah Cho | Yuqi Zhang | Isabel Slawik | Alex Waibel
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper, we present the KIT systems participating in the translation tasks of the IWSLT 2013 machine translation evaluation in all three official directions, namely English→German, German→English, and English→French. Additionally, we present the results of our submissions in the optional directions English→Chinese and English→Arabic. We used phrase-based translation systems to generate the translations. This year, we focused on adapting the systems to ASR input. Furthermore, we investigated different reordering models as well as an extended discriminative word lexicon. Finally, we added a data selection approach for domain adaptation.
2012
The KIT translation systems for IWSLT 2012
Mohammed Mediani | Yuqi Zhang | Thanh-Le Ha | Jan Niehues | Eunah Cho | Teresa Herrmann | Rainer Kärgel | Alexander Waibel
Proceedings of the 9th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper, we present the KIT systems participating in the English-French TED Translation tasks in the framework of the IWSLT 2012 machine translation evaluation. We also present several additional experiments on the English-German, English-Chinese, and English-Arabic translation pairs. Our system is a phrase-based statistical machine translation system, extended with many additional models proven to enhance translation quality. For instance, it uses part-of-speech (POS)-based reordering, translation and language model adaptation, a bilingual language model, a word-cluster language model, discriminative word lexica (DWL), and a continuous space language model. In addition, the system incorporates special steps in preprocessing and postprocessing. In preprocessing, noisy corpora are filtered by removing noisy sentence pairs; in postprocessing, the agreement between a noun and its surrounding words in the French translation is corrected based on POS tags with morphological information. Our system deals with speech transcription input by removing case information and all punctuation except periods from the text translation model.
The Karlsruhe Institute of Technology Translation Systems for the WMT 2012
Jan Niehues | Yuqi Zhang | Mohammed Mediani | Teresa Herrmann | Eunah Cho | Alex Waibel
Proceedings of the Seventh Workshop on Statistical Machine Translation
2009
Are Unaligned Words Important for Machine Translation?
Yuqi Zhang | Evgeny Matusov | Hermann Ney
Proceedings of the 13th Annual Conference of the European Association for Machine Translation
2008
The RWTH machine translation system for IWSLT 2008.
David Vilar | Daniel Stein | Yuqi Zhang | Evgeny Matusov | Arne Mauser | Oliver Bender | Saab Mansour | Hermann Ney
Proceedings of the 5th International Workshop on Spoken Language Translation: Evaluation Campaign
RWTH’s system for the 2008 IWSLT evaluation consists of a combination of different phrase-based and hierarchical statistical machine translation systems. We participated in the translation tasks for the Chinese-to-English and Arabic-to-English language pairs. We investigated different preprocessing techniques, reordering methods for the phrase-based system, including reordering of speech lattices, and syntax-based enhancements for the hierarchical systems. We also tried the combination of the Arabic-to-English and Chinese-to-English outputs as an additional submission.
2007
Chunk-Level Reordering of Source Language Sentences with Automatically Learned Rules for Statistical Machine Translation
Yuqi Zhang | Richard Zens | Hermann Ney
Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation
Improved chunk-level reordering for statistical machine translation
Yuqi Zhang | Richard Zens | Hermann Ney
Proceedings of the Fourth International Workshop on Spoken Language Translation
Inspired by previous chunk-level reordering approaches to statistical machine translation, this paper presents two methods to improve reordering at the chunk level. By introducing a new lattice weighting factor and by reordering the training source data, improvements are reported in TER and BLEU. Compared to the previous chunk-level reordering approach, the BLEU score improves by 1.4% absolute. Translation results are reported on the IWSLT Chinese-English task.
The RWTH machine translation system for IWSLT 2007
Arne Mauser | David Vilar | Gregor Leusch | Yuqi Zhang | Hermann Ney
Proceedings of the Fourth International Workshop on Spoken Language Translation
The RWTH system for the IWSLT 2007 evaluation is a combination of several statistical machine translation systems. The combination includes phrase-based models, an n-gram translation model, and a hierarchical phrase model. We describe the individual systems and the method used for combining the system outputs. Compared to our 2006 system, we newly introduce a hierarchical phrase-based translation model and show improvements from system combination for machine translation. RWTH participated in the Italian-to-English and Chinese-to-English translation directions.
2005
The RWTH Phrase-based Statistical Machine Translation System
Richard Zens | Oliver Bender | Sasa Hasan | Shahram Khadivi | Evgeny Matusov | Jia Xu | Yuqi Zhang | Hermann Ney
Proceedings of the Second International Workshop on Spoken Language Translation
2002
Chinese Base-Phrases Chunking
Yuqi Zhang | Qiang Zhou
COLING-02: The First SIGHAN Workshop on Chinese Language Processing