Ningthoujam Justwant Singh
2024
WMT24 System Description for the MultiIndic22MT Shared Task on Manipuri Language
Ningthoujam Justwant Singh | Kshetrimayum Boynao Singh | Ningthoujam Avichandra Singh | Sanjita Phijam | Thoudam Doren Singh
Proceedings of the Ninth Conference on Machine Translation
This paper presents a Transformer-based Neural Machine Translation (NMT) system developed by the Centre for Natural Language Processing and the Department of Computer Science and Engineering at the National Institute of Technology Silchar, India (NITS-CNLP) for the MultiIndic22MT 2024 Shared Task. The system focused on the English-Manipuri language pair for the WMT24 shared task. The proposed system achieved a BLEU score of 6.4, a chrF score of 28.6, and a chrF++ score of 26.6 on the Indic-Conv public test set. On the Indic-Gen public test set, it achieved a BLEU score of 8.1, a chrF score of 32.1, and a chrF++ score of 29.4 for English-to-Manipuri translation.
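The abstract above reports chrF alongside BLEU. For context, chrF is a character n-gram F-score (β = 2, so recall is weighted higher than precision). The sketch below is a minimal, illustrative pure-Python implementation of sentence-level chrF under that standard definition; it is not the authors' evaluation code, which presumably uses the shared task's official scorer (e.g. sacrebleu).

```python
from collections import Counter

def char_ngrams(text, n):
    # chrF strips whitespace before extracting character n-grams
    s = "".join(text.split())
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_order=6, beta=2.0):
    # Macro-average precision and recall over n-gram orders 1..max_order,
    # then combine them with an F-beta score (beta=2 favours recall).
    precisions, recalls = [], []
    for n in range(1, max_order + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p, r = sum(precisions) / max_order, sum(recalls) / max_order
    if p + r == 0:
        return 0.0
    return 100 * (1 + beta**2) * p * r / (beta**2 * p + r)
```

A perfect match scores 100, disjoint strings score 0; chrF++ extends this scheme by also averaging in word 1-gram and 2-gram scores.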
2023
A comparative study of transformer and transfer learning MT models for English-Manipuri
Kshetrimayum Boynao Singh | Ningthoujam Avichandra Singh | Loitongbam Sanayai Meetei | Ningthoujam Justwant Singh | Thoudam Doren Singh | Sivaji Bandyopadhyay
Proceedings of the 20th International Conference on Natural Language Processing (ICON)
In this work, we focus on the development of machine translation (MT) models for a low-resource language pair, viz. English-Manipuri. Manipuri is one of the languages listed in the Eighth Schedule of the Indian Constitution. Manipuri is currently written in two different scripts: its original script, called Meitei Mayek, and the Bengali script. We evaluate the performance of English-Manipuri MT models based on the transformer architecture and transfer learning techniques. Our MT models are trained on a dataset of 69,065 parallel sentences and validated on 500 sentences. On 500 test sentences, the English-to-Manipuri MT models achieved BLEU scores of 19.13 and 29.05 with mT5 and OpenNMT, respectively. The results demonstrate that the OpenNMT model significantly outperforms the mT5 model. Additionally, the Manipuri-to-English MT system trained with OpenNMT reported a BLEU score of 30.90. We also carried out a comparative analysis between the Bengali script and the transliterated Meitei Mayek script for English-Manipuri MT models. This analysis reveals that the transliterated version enhances MT model performance, yielding a notable improvement of +2.35 BLEU.