Maria Zafar
2024
The SETU-DCU Submissions to IWSLT 2024 Low-Resource Speech-to-Text Translation Tasks
Maria Zafar
|
Antonio Castaldo
|
Prashanth Nayak
|
Rejwanul Haque
|
Neha Gajakos
|
Andy Way
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
Natural Language Processing (NLP) research and development has experienced rapid progress in recent times due to advances in deep learning. The introduction of pre-trained large language models (LLMs) is at the core of this transformation, significantly enhancing the performance of machine translation (MT) and speech technologies. This development has also led to fundamental changes in modern translation and speech tools and their methodologies. However, challenges remain in extending this progress to underrepresented dialects and low-resource languages, primarily due to data scarcity. This paper details our submissions to the IWSLT speech translation (ST) tasks. We used the Whisper model for the automatic speech recognition (ASR) component, and mBART and NLLB as the MT components of our cascaded systems. Our research primarily focused on exploring various dialects of low-resource languages and harnessing existing resources from linguistically related languages. We conducted experiments for two morphologically diverse language pairs: Irish-to-English and Maltese-to-English. We evaluated our MT models using BLEU, chrF and COMET.
The SETU-ADAPT Submission for WMT 24 Biomedical Shared Task
Antonio Castaldo
|
Maria Zafar
|
Prashanth Nayak
|
Rejwanul Haque
|
Andy Way
|
Johanna Monti
Proceedings of the Ninth Conference on Machine Translation
This system description paper presents SETU-ADAPT’s submission to the WMT 2024 Biomedical Shared Task, where we participated for the language pairs English-to-French and English-to-German. Our approach focused on fine-tuning Large Language Models, using in-domain and synthetic data, employing different data augmentation and data retrieval strategies. We introduce a novel MT framework, involving three autonomous agents: a Translator Agent, an Evaluator Agent and a Reviewer Agent. We present our findings and report the quality of the outputs.
The SETU-ADAPT Submissions to WMT 2024 Chat Translation Tasks
Maria Zafar
|
Antonio Castaldo
|
Prashanth Nayak
|
Rejwanul Haque
|
Andy Way
Proceedings of the Ninth Conference on Machine Translation
This paper presents the SETU-ADAPT submissions to the WMT24 Chat Translation Task. Large language models (LLMs) currently provide state-of-the-art solutions to many natural language processing (NLP) problems, including machine translation (MT). For the WMT24 Chat Translation Task we leveraged LLMs for their MT capabilities. To adapt the LLMs to the domain of interest, we explored different fine-tuning and prompting strategies. We also employed efficient data retrieval methods to curate the data used for fine-tuning. We carried out experiments for two language pairs: German-to-English and French-to-English. Our MT models were evaluated using three metrics: BLEU, chrF and COMET. In this paper we describe our experiments, including training setups, results and findings.