2024
Findings of WMT2024 English-to-Low Resource Multimodal Translation Task
Shantipriya Parida | Ondřej Bojar | Idris Abdulmumin | Shamsuddeen Hassan Muhammad | Ibrahim Said Ahmad
Proceedings of the Ninth Conference on Machine Translation
This paper presents the results of the English-to-Low Resource Multimodal Translation shared tasks from the Ninth Conference on Machine Translation (WMT2024). This year, 7 teams submitted translation results for automatic and human evaluation.
OdiaGenAI’s Participation in WMT2024 English-to-Low Resource Multimodal Translation Task
Shantipriya Parida | Shashikanta Sahoo | Sambit Sekhar | Upendra Jena | Sushovan Jena | Kusum Lata
Proceedings of the Ninth Conference on Machine Translation
This paper describes the system submitted by team “ODIAGEN” to the WMT 2024 English-to-Low-Resource Multimodal Translation Task. We participated in two of the tracks: Text-only Translation and Multimodal Translation. For text-only translation, we trained the Mistral-7B model for English-to-multilingual translation (Hindi, Bengali, Malayalam, Hausa). For multimodal translation (using both image and text), we trained the PaliGemma-3B model for English-to-Hindi translation.
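The abstract above mentions training Mistral-7B for English-to-multilingual translation. As a hedged illustration only, instruction-style formatting of such training pairs might look like the following sketch; the template, field names, and example sentences are assumptions, not taken from the paper:

```python
# Hypothetical prompt builder for translation fine-tuning of a decoder-only
# model. The template below is an illustrative assumption; the paper does
# not specify the exact format used with Mistral-7B.

def build_prompt(src: str, tgt: str, tgt_lang: str) -> str:
    """Format one English->target training example as a single string."""
    return (
        f"Translate the following English sentence to {tgt_lang}.\n"
        f"English: {src}\n"
        f"{tgt_lang}: {tgt}"
    )

# Toy training pairs (invented examples, one per target language).
pairs = [
    ("Thank you.", "धन्यवाद।", "Hindi"),
    ("Thank you.", "Na gode.", "Hausa"),
]

corpus = [build_prompt(s, t, lang) for s, t, lang in pairs]
print(corpus[0])
```

The same builder would be applied per target language to produce one mixed multilingual fine-tuning corpus.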
2023
HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language
Shantipriya Parida | Idris Abdulmumin | Shamsuddeen Hassan Muhammad | Aneesh Bose | Guneet Singh Kohli | Ibrahim Said Ahmad | Ketan Kotwal | Sayan Deb Sarkar | Ondřej Bojar | Habeebah Kakudi
Findings of the Association for Computational Linguistics: ACL 2023
This paper presents “HaVQA”, the first multimodal dataset for visual question answering (VQA) tasks in the Hausa language. The dataset was created by manually translating 6,022 English question-answer pairs, which are associated with 1,555 unique images from the Visual Genome dataset. As a result, the dataset provides 12,044 gold standard English-Hausa parallel sentences that were translated in a fashion that guarantees their semantic match with the corresponding visual information. We conducted several baseline experiments on the dataset, including visual question answering, visual question elicitation, text-only and multimodal machine translation.
Proceedings of the 10th Workshop on Asian Translation
Toshiaki Nakazawa | Kazutaka Kinugawa | Hideya Mino | Isao Goto | Raj Dabre | Shohei Higashiyama | Shantipriya Parida | Makoto Morishita | Ondřej Bojar | Akiko Eriguchi | Yusuke Oda | Chenhui Chu | Sadao Kurohashi
Overview of the 10th Workshop on Asian Translation
Toshiaki Nakazawa | Kazutaka Kinugawa | Hideya Mino | Isao Goto | Raj Dabre | Shohei Higashiyama | Shantipriya Parida | Makoto Morishita | Ondřej Bojar | Akiko Eriguchi | Yusuke Oda | Chenhui Chu | Sadao Kurohashi
Proceedings of the 10th Workshop on Asian Translation
This paper presents the results of the shared tasks from the 10th workshop on Asian translation (WAT2023). For the WAT2023, 2 teams submitted their translation results for the human evaluation. We also accepted 1 research paper. About 40 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
OdiaGenAI’s Participation at WAT2023
Sk Shahid | Guneet Singh Kohli | Sambit Sekhar | Debasish Dhal | Adit Sharma | Shubhendra Kushwaha | Shantipriya Parida | Stig-Arne Grönroos | Satya Ranjan Dash
Proceedings of the 10th Workshop on Asian Translation
This paper offers an in-depth overview of the translation system submitted by team “ODIAGEN” to the Workshop on Asian Translation (WAT2023). Our focus lies in the domain of Indic multimodal tasks, specifically targeting English-to-Hindi, English-to-Malayalam, and English-to-Bengali translations. The system uses a state-of-the-art Transformer-based architecture, specifically the NLLB-200 model, fine-tuned with language-specific Visual Genome datasets. With this robust system, we were able to handle both text-to-text and multimodal translations, demonstrating versatility across translation modes. Our results show strong performance across the board, with particularly promising results in the Hindi and Bengali translation tasks. A noteworthy achievement of our system lies in its stellar performance across all text-to-text translation tasks: in the English-to-Hindi, English-to-Bengali, and English-to-Malayalam categories, our system claimed the top positions for both the evaluation and challenge sets. This system not only advances our understanding of the challenges and nuances of Indic language translation but also opens avenues for future research to enhance translation accuracy and performance.
2022
Overview of the 9th Workshop on Asian Translation
Toshiaki Nakazawa | Hideya Mino | Isao Goto | Raj Dabre | Shohei Higashiyama | Shantipriya Parida | Anoop Kunchukuttan | Makoto Morishita | Ondřej Bojar | Chenhui Chu | Akiko Eriguchi | Kaori Abe | Yusuke Oda | Sadao Kurohashi
Proceedings of the 9th Workshop on Asian Translation
This paper presents the results of the shared tasks from the 9th workshop on Asian translation (WAT2022). For the WAT2022, 8 teams submitted their translation results for the human evaluation. We also accepted 4 research papers. About 300 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
Silo NLP’s Participation at WAT2022
Shantipriya Parida | Subhadarshi Panda | Stig-Arne Grönroos | Mark Granroth-Wilding | Mika Koistinen
Proceedings of the 9th Workshop on Asian Translation
This paper provides the system description of “Silo NLP’s” submission to the Workshop on Asian Translation (WAT2022). We participated in the Indic Multimodal tasks (English→Hindi, English→Malayalam, and English→Bengali multimodal translation). For text-only translation, we used the Transformer and fine-tuned mBART. For multimodal translation, we used the same architecture and extracted object tags from the images to use as visual features, concatenated with the text sequence as input. Our submission tops many tasks, including English→Hindi multimodal translation (evaluation test), English→Malayalam text-only and multimodal translation (evaluation test), English→Bengali multimodal translation (challenge test), and English→Bengali text-only translation (evaluation test).
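The multimodal input construction described above, object tags extracted from the image and concatenated with the text sequence, can be sketched as follows; the separator token and tag format are assumptions, not the exact scheme used by the system:

```python
# Sketch of a multimodal input: tags from an object detector are appended
# to the source sentence before it is fed to the translation model.
# The "##" separator is an illustrative choice, not the system's own.

def build_multimodal_input(sentence: str, object_tags: list[str]) -> str:
    """Concatenate detected object tags with the text sequence."""
    return sentence + " ## " + " ".join(object_tags)

src = "A man is riding a horse."
tags = ["man", "horse"]  # e.g. produced by an off-the-shelf object detector
print(build_multimodal_input(src, tags))
```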
Hausa Visual Genome: A Dataset for Multi-Modal English to Hausa Machine Translation
Idris Abdulmumin | Satya Ranjan Dash | Musa Abdullahi Dawud | Shantipriya Parida | Shamsuddeen Muhammad | Ibrahim Sa’id Ahmad | Subhadarshi Panda | Ondřej Bojar | Bashir Shehu Galadanci | Bello Shehu Bello
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Multi-modal Machine Translation (MMT) enables the use of visual information to enhance translation quality, especially where the full context needed for unambiguous translation is not available to standard machine translation. Despite the increasing popularity of this technique, it lacks sufficient high-quality datasets to realize its full potential. Hausa, a Chadic language, is a member of the Afro-Asiatic language family. An estimated 100 to 150 million people speak the language, with more than 80 million indigenous speakers, more than any other Chadic language. Despite the large number of speakers, Hausa is considered a low-resource language in natural language processing (NLP), owing to the absence of sufficient resources for most NLP tasks. While some datasets exist, they are scarce, machine-generated, or confined to the religious domain. There is therefore a need to create training and evaluation data for machine learning tasks, bridging the research gap in the language. This work presents the Hausa Visual Genome (HaVG), a dataset that contains descriptions of an image, or of a section within the image, in Hausa together with their English equivalents. The dataset was prepared by automatically translating the English descriptions of the images in the Hindi Visual Genome (HVG). The synthetic Hausa data was then carefully post-edited with reference to the respective images. The data comprises 32,923 images and their descriptions, divided into training, development, test, and challenge test sets. The Hausa Visual Genome is the first dataset of its kind and can be used for Hausa-English machine translation, multi-modal research, and image description, among various other natural language processing and generation tasks.
Universal Dependency Treebank for Odia Language
Shantipriya Parida | Kalyanamalini Shabadi | Atul Kr. Ojha | Saraswati Sahoo | Satya Ranjan Dash | Bijayalaxmi Dash
Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference
This paper presents the first publicly available treebank of Odia, a morphologically rich, low-resource Indian language. The treebank contains approximately 1,082 tokens (100 sentences) of Odia selected from “Samantar”, the largest available parallel corpora collection for Indic languages. All the selected sentences are manually annotated following the “Universal Dependencies” guidelines. The morphological analysis of the Odia treebank was performed using machine learning techniques. The annotated treebank will enrich Odia language resources and help in building language technology tools for cross-lingual learning and typological research. We also build a preliminary Odia parser using a machine learning approach; its accuracy is 86.6% for tokenization, 64.1% UPOS, 63.78% XPOS, 42.04% UAS, and 21.34% LAS. Finally, the paper briefly discusses the linguistic analysis of the Odia UD treebank.
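The attachment scores reported above can be illustrated with a minimal UAS computation over per-token head indices as annotated in UD treebanks; the toy head assignments below are invented for illustration:

```python
# Minimal Unlabeled Attachment Score (UAS): the fraction of tokens whose
# predicted syntactic head matches the gold head. The head lists below are
# a made-up 4-token example, not data from the Odia treebank.

def uas(gold_heads: list[int], pred_heads: list[int]) -> float:
    """Fraction of tokens with a correctly predicted head (0 = root)."""
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return correct / len(gold_heads)

gold = [2, 0, 2, 3]   # gold head index per token
pred = [2, 0, 2, 2]   # parser got the last token's head wrong
print(f"UAS = {uas(gold, pred):.2%}")  # → UAS = 75.00%
```

LAS is computed the same way but additionally requires the dependency label to match, which is why it is always at or below UAS.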
2021
Open Machine Translation for Low Resource South American Languages (AmericasNLP 2021 Shared Task Contribution)
Shantipriya Parida | Subhadarshi Panda | Amulya Dash | Esau Villatoro-Tello | A. Seza Doğruöz | Rosa M. Ortega-Mendoza | Amadeo Hernández | Yashvardhan Sharma | Petr Motlicek
Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas
This paper describes team “Tamalli’s” submission to the AmericasNLP 2021 shared task on Open Machine Translation for low-resource South American languages. Our goal was to evaluate different Machine Translation (MT) techniques, statistical and neural, under several configuration settings. We obtained the second-best results for the language pairs “Spanish-Bribri”, “Spanish-Asháninka”, and “Spanish-Rarámuri” in the category “Development set not used for training”. Our experiments will serve as a point of reference for researchers working on MT with low-resource languages.
Proceedings of the 8th Workshop on Asian Translation (WAT2021)
Toshiaki Nakazawa | Hideki Nakayama | Isao Goto | Hideya Mino | Chenchen Ding | Raj Dabre | Anoop Kunchukuttan | Shohei Higashiyama | Hiroshi Manabe | Win Pa Pa | Shantipriya Parida | Ondřej Bojar | Chenhui Chu | Akiko Eriguchi | Kaori Abe | Yusuke Oda | Katsuhito Sudoh | Sadao Kurohashi | Pushpak Bhattacharyya
Overview of the 8th Workshop on Asian Translation
Toshiaki Nakazawa | Hideki Nakayama | Chenchen Ding | Raj Dabre | Shohei Higashiyama | Hideya Mino | Isao Goto | Win Pa Pa | Anoop Kunchukuttan | Shantipriya Parida | Ondřej Bojar | Chenhui Chu | Akiko Eriguchi | Kaori Abe | Yusuke Oda | Sadao Kurohashi
Proceedings of the 8th Workshop on Asian Translation (WAT2021)
This paper presents the results of the shared tasks from the 8th workshop on Asian translation (WAT2021). For the WAT2021, 28 teams participated in the shared tasks and 24 teams submitted their translation results for the human evaluation. We also accepted 5 research papers. About 2,100 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
NLPHut’s Participation at WAT2021
Shantipriya Parida | Subhadarshi Panda | Ketan Kotwal | Amulya Ratna Dash | Satya Ranjan Dash | Yashvardhan Sharma | Petr Motlicek | Ondřej Bojar
Proceedings of the 8th Workshop on Asian Translation (WAT2021)
This paper describes our team “NLPHut’s” submissions to the WAT 2021 shared tasks. We participated in the English→Hindi Multimodal translation task, the English→Malayalam Multimodal translation task, and the Indic Multilingual translation task. We used the state-of-the-art Transformer model with language tags in different settings for the translation tasks, and proposed a novel “region-specific” caption generation approach using a combination of an image CNN and an LSTM for Hindi and Malayalam image captioning. Our submission tops the English→Malayalam Multimodal translation task (text-only translation and Malayalam caption) and ranks second-best in the English→Hindi Multimodal translation task (text-only translation and Hindi caption). Our submissions also performed well in the Indic Multilingual translation tasks.
Multimodal Neural Machine Translation System for English to Bengali
Shantipriya Parida | Subhadarshi Panda | Satya Prakash Biswal | Ketan Kotwal | Arghyadeep Sen | Satya Ranjan Dash | Petr Motlicek
Proceedings of the First Workshop on Multimodal Machine Translation for Low Resource Languages (MMTLRL 2021)
Multimodal Machine Translation (MMT) systems utilize additional information from modalities beyond text, typically images, to improve the quality of machine translation (MT). Despite proven advantages, it is difficult to develop an MMT system for many languages, primarily due to the lack of suitable multimodal datasets. In this work, we develop an MMT system for English→Bengali using the recently published Bengali Visual Genome (BVG) dataset, which contains images with associated bilingual textual descriptions. Through a comparative study of the developed MMT system vis-a-vis text-to-text translation, we demonstrate that the use of multimodal data not only improves translation performance, with BLEU gains of +1.3 on the development set, +3.9 on the evaluation test set, and +0.9 on the challenge test set, but also helps to resolve ambiguities in the pure text description. To the best of our knowledge, our English-Bengali MMT system is the first attempt in this direction and can thus act as a baseline for subsequent research in MMT for low-resource languages.
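The BLEU gains quoted above are measured with the standard n-gram-overlap metric. A minimal sentence-level sketch of BLEU follows; it is simplified (whitespace tokenization, crude zero-count handling) and is not the exact scoring tool used in the evaluation:

```python
import math
from collections import Counter

# Simplified sentence-level BLEU: geometric mean of 1..4-gram precisions
# times a brevity penalty. Real evaluations use standardized tools with
# proper tokenization and smoothing; this is only an illustration.

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis: str, reference: str, max_n: int = 4) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((h & r).values())          # clipped n-gram matches
        total = max(sum(h.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(bleu("a man rides a horse", "a man rides a horse"), 2))  # → 1.0
```

A "+1.3 BLEU" improvement means the corpus-level score rose by 1.3 points on this 0-100 (or 0-1) scale.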
2020
OdiEnCorp 2.0: Odia-English Parallel Corpus for Machine Translation
Shantipriya Parida | Satya Ranjan Dash | Ondřej Bojar | Petr Motlicek | Priyanka Pattnaik | Debasish Kumar Mallick
Proceedings of the WILDRE5– 5th Workshop on Indian Language Data: Resources and Evaluation
The preparation of parallel corpora is a challenging task, particularly for languages that are under-represented in the digital world. In a multilingual country like India, the need for such parallel corpora is pressing for several low-resource languages. In this work, we provide an extended English-Odia parallel corpus, OdiEnCorp 2.0, aimed particularly at Neural Machine Translation (NMT) systems that translate English↔Odia. OdiEnCorp 2.0 includes existing English-Odia corpora, and we extended the collection through several other methods of data acquisition: parallel data scraping from many websites, including Odia Wikipedia, and optical character recognition (OCR) to extract parallel data from scanned images. Our OCR-based data extraction approach to building a parallel corpus is suitable for other low-resource languages that lack online content. The resulting OdiEnCorp 2.0 contains 98,302 sentences and 1.69 million English and 1.47 million Odia tokens. To the best of our knowledge, OdiEnCorp 2.0 is the largest Odia-English parallel corpus covering different domains and available freely for non-commercial and research purposes.
BertAA : BERT fine-tuning for Authorship Attribution
Maël Fabien | Esau Villatoro-Tello | Petr Motlicek | Shantipriya Parida
Proceedings of the 17th International Conference on Natural Language Processing (ICON)
Identifying the author of a given text can be useful in historical literature, plagiarism detection, or police investigations. Authorship Attribution (AA) has been well studied and has mostly relied on extensive feature engineering. More recently, deep learning-based approaches have been explored for AA. In this paper, we introduce BertAA, a fine-tuning of a pre-trained BERT language model with an additional dense layer and a softmax activation to perform authorship classification. This approach reaches competitive performance on the Enron Email, Blog Authorship, and IMDb (and IMDb62) datasets, up to 5.3% (relative) above current state-of-the-art approaches. We performed an exhaustive analysis identifying the strengths and weaknesses of the proposed method. In addition, we evaluated the impact of including additional features (e.g. stylometric and hybrid features) in an ensemble approach, improving the macro-averaged F1-score by 2.7% (relative) on average.
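The classification head described above, a dense layer plus softmax on top of BERT's pooled representation, can be sketched in plain Python. The dimensions and random parameters below are illustrative stand-ins, and the pretrained encoder itself is omitted:

```python
import math
import random

# Sketch of a dense + softmax authorship-classification head. In BertAA
# this sits on top of BERT's pooled sentence embedding; here the embedding
# and weights are random placeholders with tiny illustrative sizes.

random.seed(0)
hidden_dim, num_authors = 8, 4

# Stand-in for the pooled [CLS] embedding of one document.
pooled = [random.gauss(0, 1) for _ in range(hidden_dim)]
# Dense-layer parameters (randomly initialised here; learned in practice).
W = [[random.gauss(0, 0.02) for _ in range(num_authors)] for _ in range(hidden_dim)]
b = [0.0] * num_authors

# logits = pooled @ W + b
logits = [sum(pooled[i] * W[i][j] for i in range(hidden_dim)) + b[j]
          for j in range(num_authors)]
# Numerically stable softmax over the author classes.
m = max(logits)
exps = [math.exp(z - m) for z in logits]
probs = [e / sum(exps) for e in exps]
predicted_author = probs.index(max(probs))
print(predicted_author, round(sum(probs), 6))
```

Fine-tuning trains the encoder and this head jointly with cross-entropy loss against the gold author labels.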
Detection of Similar Languages and Dialects Using Deep Supervised Autoencoder
Shantipriya Parida | Esau Villatoro-Tello | Sajit Kumar | Maël Fabien | Petr Motlicek
Proceedings of the 17th International Conference on Natural Language Processing (ICON)
Language detection is considered a difficult task, especially for similar languages, varieties, and dialects. With the growing amount of online content in different languages, the need for reliable and robust language detection tools has also increased. In this work, we use supervised autoencoders (SAEs) with a Bayesian optimizer for language detection and highlight their efficiency in detecting similar languages with dialectal variance in comparison to other state-of-the-art techniques. We evaluated our approach on multiple datasets: Ling10, Discriminating between Similar Languages (DSL), and Indo-Aryan Language Identification (ILI). The obtained results demonstrate that SAEs are highly effective in detecting languages, with up to 100% accuracy on Ling10. Similarly, we obtain competitive performance in identifying similar languages and dialects: 92% and 85% for the DSL and ILI datasets, respectively.
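A supervised autoencoder couples a reconstruction objective with a supervised prediction objective on the latent representation. The following sketch shows only the combined loss; the encoder/decoder networks, the Bayesian hyperparameter search, and the weighting value are omitted or assumed, and the numbers are invented:

```python
import math

# Supervised-autoencoder objective: reconstruction loss (MSE) plus a
# classification loss (cross-entropy) on the language label, balanced by
# a weight alpha. All values below are illustrative placeholders.

def mse(x, x_hat):
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def cross_entropy(probs, gold_label):
    return -math.log(probs[gold_label])

x     = [0.2, 0.7, 0.1]     # input features for one sentence
x_hat = [0.25, 0.6, 0.15]   # decoder's reconstruction of x
probs = [0.1, 0.8, 0.1]     # classifier's predicted language distribution
gold  = 1                   # index of the true language

alpha = 0.5                 # assumed trade-off weight between the terms
loss = mse(x, x_hat) + alpha * cross_entropy(probs, gold)
print(round(loss, 4))       # → 0.1166
```

Minimizing this joint loss forces the latent code to both reconstruct the input and separate the language classes, which is what makes the representation discriminative for close dialects.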
Proceedings of the 7th Workshop on Asian Translation
Toshiaki Nakazawa | Hideki Nakayama | Chenchen Ding | Raj Dabre | Anoop Kunchukuttan | Win Pa Pa | Ondřej Bojar | Shantipriya Parida | Isao Goto | Hideya Mino | Hiroshi Manabe | Katsuhito Sudoh | Sadao Kurohashi | Pushpak Bhattacharyya
Overview of the 7th Workshop on Asian Translation
Toshiaki Nakazawa | Hideki Nakayama | Chenchen Ding | Raj Dabre | Shohei Higashiyama | Hideya Mino | Isao Goto | Win Pa Pa | Anoop Kunchukuttan | Shantipriya Parida | Ondřej Bojar | Sadao Kurohashi
Proceedings of the 7th Workshop on Asian Translation
This paper presents the results of the shared tasks from the 7th workshop on Asian translation (WAT2020). For the WAT2020, 20 teams participated in the shared tasks and 14 teams submitted their translation results for the human evaluation. We also received 12 research paper submissions out of which 7 were accepted. About 500 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
ODIANLP’s Participation in WAT2020
Shantipriya Parida | Petr Motlicek | Amulya Ratna Dash | Satya Ranjan Dash | Debasish Kumar Mallick | Satya Prakash Biswal | Priyanka Pattnaik | Biranchi Narayan Nayak | Ondřej Bojar
Proceedings of the 7th Workshop on Asian Translation
This paper describes the ODIANLP submission to WAT 2020. We participated in the English-Hindi Multimodal task and the Indic task. We used the state-of-the-art Transformer model for the translation tasks and InceptionResNetV2 for the Hindi image captioning task. Our submission tops the English→Hindi Multimodal task in its track and the Odia↔English translation tasks. Our submissions also performed well in the Indic Multilingual tasks.
2019
Abstract Text Summarization: A Low Resource Challenge
Shantipriya Parida | Petr Motlicek
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Text summarization is considered a challenging task in the NLP community. Datasets for multilingual text summarization are rare and difficult to construct. In this work, we build an abstract text summarizer for German-language text using the state-of-the-art “Transformer” model. We propose an iterative data augmentation approach which uses synthetic data along with the real summarization data for German. To generate the synthetic data, the Common Crawl (German) dataset, which covers different domains, is exploited. The synthetic data is effective in the low-resource condition and is particularly helpful in our multilingual scenario, where the availability of summarization data remains a challenging issue. The data is also useful in deep learning scenarios, where neural models require large amounts of training data to exploit their capacity. Summarization performance is measured in terms of ROUGE and BLEU scores. We achieve an absolute improvement of +1.5 and +16.0 in ROUGE-1 F1 (R1_F1) on the development and test sets, respectively, compared to the system that does not rely on data augmentation.
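The R1_F1 figure reported above is unigram-overlap F1 between a system summary and a reference summary. A minimal sketch, not the official scoring script:

```python
from collections import Counter

# Minimal ROUGE-1 F1: precision and recall of unigram overlap between a
# system summary and a reference, combined into an F1 score.

def rouge1_f1(summary: str, reference: str) -> float:
    sys_counts = Counter(summary.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum((sys_counts & ref_counts).values())  # clipped matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(sys_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat", "the cat sat on the mat"), 3))  # → 0.667
```

An absolute improvement of "+1.5 R1_F1" therefore means this F1, averaged over the evaluation set, rose by 1.5 points.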
Proceedings of the 6th Workshop on Asian Translation
Toshiaki Nakazawa | Chenchen Ding | Raj Dabre | Anoop Kunchukuttan | Nobushige Doi | Yusuke Oda | Ondřej Bojar | Shantipriya Parida | Isao Goto | Hideya Mino
Overview of the 6th Workshop on Asian Translation
Toshiaki Nakazawa | Nobushige Doi | Shohei Higashiyama | Chenchen Ding | Raj Dabre | Hideya Mino | Isao Goto | Win Pa Pa | Anoop Kunchukuttan | Yusuke Oda | Shantipriya Parida | Ondřej Bojar | Sadao Kurohashi
Proceedings of the 6th Workshop on Asian Translation
This paper presents the results of the shared tasks from the 6th workshop on Asian translation (WAT2019), including the Ja↔En and Ja↔Zh scientific paper translation subtasks, the Ja↔En and Ja↔Ko patent translation subtasks, the Hi↔En, My↔En, Km↔En, and Ta↔En mixed domain subtasks, and the Ru↔Ja news commentary translation task. For the WAT2019, 25 teams participated in the shared tasks. We also received 10 research paper submissions, out of which 6 were accepted. About 400 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
Idiap NMT System for WAT 2019 Multimodal Translation Task
Shantipriya Parida | Ondřej Bojar | Petr Motlicek
Proceedings of the 6th Workshop on Asian Translation
This paper describes the Idiap submission to WAT 2019 for the English-Hindi Multi-Modal Translation Task. We have used the state-of-the-art Transformer model and utilized the IITB English-Hindi parallel corpus as an additional data source. Among the different tracks of the multi-modal task, we have participated in the “Text-Only” track for the evaluation and challenge test sets. Our submission tops in its track among the competitors in terms of both automatic and manual evaluation. Based on automatic scores, our text-only submission also outperforms systems that consider visual information in the “multi-modal translation” task.
2018
CUNI NMT System for WAT 2018 Translation Tasks
Tom Kocmi | Shantipriya Parida | Ondřej Bojar
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation: 5th Workshop on Asian Translation
Translating Short Segments with NMT: A Case Study in English-to-Hindi
Shantipriya Parida | Ondřej Bojar
Proceedings of the 21st Annual Conference of the European Association for Machine Translation
This paper presents a case study in translating short image captions of the Visual Genome dataset from English into Hindi using out-of-domain data sets of varying size. We experiment with three NMT models: the shallow and deep sequence-to-sequence models and the Transformer model as implemented in the Marian toolkit. Phrase-based Moses serves as the baseline. The results indicate that the Transformer model outperforms the others in the large-data setting in a number of automatic metrics and in manual evaluation, and it also produces the fewest truncated sentences. Transformer training is, however, very sensitive to the hyperparameters, so it requires more experimenting. The deep sequence-to-sequence model produced more flawless outputs in the small-data setting and was generally more stable, at the cost of more training iterations.