2024
HW-TSC’s Submissions To the IWSLT2024 Low-resource Speech Translation Tasks
Zheng Jiawei
|
Hengchao Shang
|
Zongyao Li
|
Zhanglin Wu
|
Daimeng Wei
|
Zhiqiang Rao
|
Shaojun Li
|
Jiaxin Guo
|
Bin Wei
|
Yuanchang Luo
|
Hao Yang
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
In this work, we submitted our systems to the low-resource track of the IWSLT 2024 Speech Translation Campaign. Our systems tackled the unconstrained condition of the Dialectal Arabic North Levantine (ISO-3 code: apc) to English language pair. We proposed a cascaded solution consisting of an automatic speech recognition (ASR) model and a machine translation (MT) model. The ASR model employed the pre-trained Whisper-large-v3 model to process the speech data, while the MT model adopted the Transformer architecture. To improve the quality of the MT model, our system utilized not only the data provided by the competition but also an additional 54 million parallel sentences. Ultimately, our final system achieved a BLEU score of 24.7 for apc-to-English translation.
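A minimal sketch of the cascaded ASR-then-MT pipeline described in the abstract, assuming the Hugging Face transformers library; the apc-to-English MT checkpoint path and the input file name are placeholders, since the paper trains its own Transformer MT model rather than using an off-the-shelf one.

# Illustrative cascade, not the authors' actual system.
from transformers import pipeline

# Step 1: ASR with the pre-trained Whisper-large-v3 model.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
transcript = asr("apc_utterance.wav")["text"]  # placeholder audio file

# Step 2: MT from the ASR transcript into English.
# Placeholder checkpoint: the paper uses an in-house Transformer trained on
# the competition data plus ~54M additional parallel sentences.
mt = pipeline("translation", model="path/to/apc-en-transformer")
english = mt(transcript)[0]["translation_text"]
print(english)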
Machine Translation Advancements of Low-Resource Indian Languages by Transfer Learning
Bin Wei
|
Zheng Jiawei
|
Zongyao Li
|
Zhanglin Wu
|
Jiaxin Guo
|
Daimeng Wei
|
Zhiqiang Rao
|
Shaojun Li
|
Yuanchang Luo
|
Hengchao Shang
|
Jinlong Yang
|
Yuhao Xie
|
Hao Yang
Proceedings of the Ninth Conference on Machine Translation
This paper introduces the submission by Huawei Translation Center (HW-TSC) to the WMT24 Indian Languages Machine Translation (MT) Shared Task. To develop a reliable machine translation system for low-resource Indian languages, we employed two distinct knowledge transfer strategies, taking into account the characteristics of the language scripts and the support available from existing open-source models for Indian languages. For Assamese (as) and Manipuri (mn), we fine-tuned the existing IndicTrans2 open-source model to enable bidirectional translation between English and these languages. For Khasi (kh) and Mizo (mz), we trained a multilingual model as the baseline using bilingual data from these four language pairs as well as additional Bengali data, which shares the same language family. This was followed by fine-tuning to achieve bidirectional translation between English and Khasi, as well as English and Mizo. Our transfer learning experiments produced significant results: 23.5 BLEU for en→as, 31.8 BLEU for en→mn, 36.2 BLEU for as→en, and 47.9 BLEU for mn→en on their respective test sets. Similarly, the multilingual model transfer learning experiments yielded impressive outcomes, achieving 19.7 BLEU for en→kh, 32.8 BLEU for en→mz, 16.1 BLEU for kh→en, and 33.9 BLEU for mz→en on their respective test sets. These results not only highlight the effectiveness of transfer learning techniques for low-resource languages but also contribute to advancing machine translation capabilities for low-resource Indian languages.
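A hedged sketch of the transfer-learning recipe the abstract outlines: fine-tune an existing multilingual seq2seq checkpoint on a single low-resource pair. The checkpoint path, data file, field names, and hyperparameters below are assumptions for illustration; the paper fine-tunes IndicTrans2 for as/mn and its own multilingual baseline for kh/mz.

# Illustrative fine-tuning loop with Hugging Face transformers/datasets.
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)
from datasets import load_dataset

ckpt = "path/to/indictrans2-or-multilingual-baseline"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt, trust_remote_code=True)

data = load_dataset("json", data_files={"train": "en-as.train.jsonl"})  # placeholder file

def preprocess(batch):
    # Tokenize source/target pairs; the "src"/"tgt" field names are assumptions.
    return tok(batch["src"], text_target=batch["tgt"],
               truncation=True, max_length=256)

train = data["train"].map(preprocess, batched=True,
                          remove_columns=data["train"].column_names)

args = Seq2SeqTrainingArguments(output_dir="ft-en-as",
                                per_device_train_batch_size=16,
                                learning_rate=3e-5,
                                num_train_epochs=3)
collator = DataCollatorForSeq2Seq(tok, model=model)
Seq2SeqTrainer(model=model, args=args, train_dataset=train,
               data_collator=collator, tokenizer=tok).train()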
Multilingual Transfer and Domain Adaptation for Low-Resource Languages of Spain
Yuanchang Luo
|
Zhanglin Wu
|
Daimeng Wei
|
Hengchao Shang
|
Zongyao Li
|
Jiaxin Guo
|
Zhiqiang Rao
|
Shaojun Li
|
Jinlong Yang
|
Yuhao Xie
|
Zheng Jiawei
|
Bin Wei
|
Hao Yang
Proceedings of the Ninth Conference on Machine Translation
This article describes the submissions of Huawei Translation Service Center (HW-TSC) to the WMT 2024 Translation into Low-Resource Languages of Spain task. We participated in three translation tasks: Spanish to Aragonese (es2arg), Spanish to Aranese (es2arn), and Spanish to Asturian (es2ast). For these three translation tasks, we applied training strategies such as multilingual transfer, regularized dropout, forward translation and back translation, LaBSE denoising, and transductive ensemble learning to a neural machine translation (NMT) model based on the deep Transformer-big architecture. With these enhancement strategies, our submissions achieved competitive results in the final evaluation.
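One of the listed strategies, LaBSE denoising, filters parallel data by cross-lingual sentence similarity. Below is a minimal sketch using the public sentence-transformers LaBSE model; the file names and the similarity threshold are assumptions, not values from the paper.

# Illustrative LaBSE-based filtering of a parallel corpus.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/LaBSE")

src = [line.strip() for line in open("train.es")]   # placeholder source side
tgt = [line.strip() for line in open("train.arg")]  # placeholder target side

emb_src = model.encode(src, normalize_embeddings=True, batch_size=64)
emb_tgt = model.encode(tgt, normalize_embeddings=True, batch_size=64)

# Cosine similarity of each aligned pair (embeddings are L2-normalized).
sims = np.sum(emb_src * emb_tgt, axis=1)

# Keep only pairs above an assumed similarity threshold.
keep = [(s, t) for s, t, sim in zip(src, tgt, sims) if sim >= 0.75]
print(f"kept {len(keep)} of {len(src)} pairs")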
Exploring the Traditional NMT Model and Large Language Model for Chat Translation
Jinlong Yang
|
Hengchao Shang
|
Daimeng Wei
|
Jiaxin Guo
|
Zongyao Li
|
Zhanglin Wu
|
Zhiqiang Rao
|
Shaojun Li
|
Yuhao Xie
|
Yuanchang Luo
|
Zheng Jiawei
|
Bin Wei
|
Hao Yang
Proceedings of the Ninth Conference on Machine Translation
This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT24 chat translation shared task on English↔German (en-de) in both directions. The experiments involved fine-tuning models using chat data and exploring various strategies, including Minimum Bayesian Risk (MBR) decoding and self-training. The results show significant performance improvements in certain directions, with the MBR self-training method achieving the best results. The paper also discusses the challenges and potential avenues for further research in the field of chat translation.
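A minimal sketch of MBR decoding as referenced in the abstract: sample several candidate translations and select the one with the highest average utility against the other samples. Sentence BLEU via sacrebleu is used here as a stand-in utility; the paper's exact metric and sampling setup may differ.

# Illustrative MBR candidate selection.
import sacrebleu

def mbr_select(candidates):
    """Return the candidate with the highest expected utility over the sample pool."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        # Average utility of `hyp` when every other sample acts as a pseudo-reference.
        refs = [c for c in candidates if c is not hyp]
        score = sum(sacrebleu.sentence_bleu(hyp, [r]).score for r in refs) / len(refs)
        if score > best_score:
            best, best_score = hyp, score
    return best

# Example with three hypothetical sampled translations.
samples = ["Guten Morgen!", "Guten Morgen.", "Morgen!"]
print(mbr_select(samples))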