Md Abdur Razzaq Riyadh
2024
UOM-Constrained IWSLT 2024 Shared Task Submission - Maltese Speech Translation
Kurt Abela | Md Abdur Razzaq Riyadh | Melanie Galea | Alana Busuttil | Roman Kovalev | Aiden Williams | Claudia Borg
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
This paper presents our IWSLT 2024 shared task submission on the low-resource track. This submission forms part of the constrained setup, which implies limited data for training. Following the introduction, the paper reviews previous approaches to speech translation and their application to Maltese, then describes the methodology, the evaluation and results, and the conclusion. We present a cascaded submission for the Maltese-to-English language pair, consisting of a three-stage pipeline: a DeepSpeech 1 Automatic Speech Recognition (ASR) system, a KenLM model to refine the transcriptions, and finally an LSTM machine translation model. The submission achieves a 0.5 BLEU score on the overall test set, and the ASR system achieves a word error rate of 97.15%. Our code is made publicly available.
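A minimal sketch of such a cascaded pipeline, assuming the deepspeech and kenlm Python packages; the model file paths (`mt_asr.pbmm`, `mt_3gram.arpa`, `utterance.wav`) and the `translate_mt_lstm` stub are hypothetical placeholders, not the authors' released artifacts.

```python
# Cascaded speech translation sketch: DeepSpeech ASR -> KenLM rescoring -> LSTM MT.
import wave

import numpy as np
import deepspeech
import kenlm


def transcribe(wav_path: str, asr: deepspeech.Model) -> str:
    """Run DeepSpeech ASR on a 16 kHz mono 16-bit WAV file."""
    with wave.open(wav_path, "rb") as wav:
        audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    return asr.stt(audio)


def rescore(candidates: list[str], lm: kenlm.Model) -> str:
    """Pick the candidate the KenLM n-gram model scores highest
    (length-normalised log10 probability)."""
    return max(
        candidates,
        key=lambda s: lm.score(s, bos=True, eos=True) / max(len(s.split()), 1),
    )


def translate_mt_lstm(maltese_text: str) -> str:
    """Hypothetical stub for the trained LSTM encoder-decoder MT model."""
    raise NotImplementedError("plug in the trained Maltese-to-English LSTM model")


asr = deepspeech.Model("mt_asr.pbmm")   # assumed acoustic model path
lm = kenlm.Model("mt_3gram.arpa")       # assumed language model path

hypothesis = transcribe("utterance.wav", asr)
# With a single hypothesis rescoring is a no-op; it pays off on n-best lists.
best = rescore([hypothesis], lm)
english = translate_mt_lstm(best)
```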
Mela at ArAIEval Shared Task: Propagandistic Techniques Detection in Arabic with a Multilingual Approach
Md Abdur Razzaq Riyadh | Sara Nabhani
Proceedings of the Second Arabic Natural Language Processing Conference
This paper presents our system submitted for Task 1 of the ArAIEval Shared Task on Unimodal (Text) Propagandistic Technique Detection in Arabic. Task 1 involves identifying all employed propaganda techniques in a given text from a set of possible techniques or detecting that no propaganda technique is present. Additionally, the task requires identifying the specific spans of text where these techniques occur. We explored the capabilities of a multilingual BERT model for this task, focusing on the effectiveness of using outputs from different hidden layers within the model. By fine-tuning the multilingual BERT, we aimed to improve the model’s ability to recognize and locate various propaganda techniques. Our experiments showed that leveraging the hidden layers of the BERT model enhanced detection performance. Our system achieved competitive results, ranking second in the shared task, demonstrating that multilingual BERT models, combined with outputs from hidden layers, can effectively detect and identify spans of propaganda techniques in Arabic text.
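A minimal sketch, assuming PyTorch and Hugging Face transformers, of a token-level tagger that combines outputs from several hidden layers of multilingual BERT; the layer selection, tag set size, and classifier head are illustrative assumptions, not the authors' exact configuration.

```python
# Span tagging over concatenated mBERT hidden layers (BIO-style token labels).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class LayerwiseSpanTagger(nn.Module):
    def __init__(self, num_tags: int, layers_to_use=(-4, -3, -2, -1)):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(
            "bert-base-multilingual-cased", output_hidden_states=True
        )
        self.layers = layers_to_use
        hidden = self.encoder.config.hidden_size
        # One linear classifier over the concatenation of the selected layers.
        self.classifier = nn.Linear(hidden * len(layers_to_use), num_tags)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # out.hidden_states: tuple of (num_layers + 1) tensors [batch, seq, hidden]
        stacked = torch.cat([out.hidden_states[i] for i in self.layers], dim=-1)
        return self.classifier(stacked)  # per-token logits: [batch, seq, num_tags]


tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = tokenizer("نص تجريبي", return_tensors="pt")  # toy Arabic input ("sample text")
model = LayerwiseSpanTagger(num_tags=2 * 23 + 1)   # e.g. BIO tags over 23 techniques
logits = model(enc["input_ids"], enc["attention_mask"])
```

Decoding the per-token tags back to character spans then only requires mapping subword offsets to the original text, e.g. via the tokenizer's `return_offsets_mapping` option.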