Sara Nabhani
2024
UM IWSLT 2024 Low-Resource Speech Translation: Combining Maltese and North Levantine Arabic
Sara Nabhani | Aiden Williams | Miftahul Jannat | Kate Rebecca Belcher | Melanie Galea | Anna Taylor | Kurt Micallef | Claudia Borg
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
The IWSLT low-resource track encourages innovation in the field of speech translation, particularly in data-scarce conditions. This paper details our submission to the IWSLT 2024 low-resource track shared task for Maltese-English and North Levantine Arabic-English spoken language translation, using an unconstrained pipeline approach. We improve ASR performance by using language models to correct the produced output. We present a two-step approach for MT that uses data from external sources, showing improvements over baseline systems. We also explore transliteration as a means to further augment the MT data and exploit the cross-lingual similarities between Maltese and Arabic.
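The cross-lingual similarity mentioned above stems from Maltese being a Semitic language written in Latin script, so transliterating Arabic-script text into a Maltese-like Latin orthography can make shared vocabulary visible to an MT model. Below is a minimal character-level sketch of this idea; the mapping is a small illustrative subset, not the transliteration scheme actually used in the paper.

```python
# Toy Arabic-to-Latin transliteration table (illustrative subset only).
# Some mappings deliberately target Maltese orthography, e.g. x for /sh/.
ARABIC_TO_LATIN = {
    "ب": "b", "ت": "t", "د": "d", "ر": "r", "س": "s",
    "ق": "q", "ك": "k", "ل": "l", "م": "m", "ن": "n",
    "ح": "ħ",  # Maltese ħ corresponds to the sound of Arabic ح
    "ش": "x",  # Maltese spells the /sh/ sound as x
    "ا": "a", "و": "w", "ي": "j",
}

def transliterate(text: str) -> str:
    """Map each Arabic character to its Latin counterpart, if known."""
    return "".join(ARABIC_TO_LATIN.get(ch, ch) for ch in text)

print(transliterate("سلام"))  # -> "slam"
```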
Mela at ArAIEval Shared Task: Propagandistic Techniques Detection in Arabic with a Multilingual Approach
Md Abdur Razzaq Riyadh | Sara Nabhani
Proceedings of The Second Arabic Natural Language Processing Conference
This paper presents our system submitted for Task 1 of the ArAIEval Shared Task on Unimodal (Text) Propagandistic Technique Detection in Arabic. Task 1 involves identifying all employed propaganda techniques in a given text from a set of possible techniques or detecting that no propaganda technique is present. Additionally, the task requires identifying the specific spans of text where these techniques occur. We explored the capabilities of a multilingual BERT model for this task, focusing on the effectiveness of using outputs from different hidden layers within the model. By fine-tuning the multilingual BERT, we aimed to improve the model’s ability to recognize and locate various propaganda techniques. Our experiments showed that leveraging the hidden layers of the BERT model enhanced detection performance. Our system achieved competitive results, ranking second in the shared task, demonstrating that multilingual BERT models, combined with outputs from hidden layers, can effectively detect and identify spans of propaganda techniques in Arabic text.
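A minimal sketch of the hidden-layer idea described in the abstract, assuming a Hugging Face transformers setup: per-token logits are computed from a combination of several hidden layers rather than the final layer alone. The model name, the choice of layers (summing the last four), and the label count are illustrative assumptions, not the authors' exact configuration.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"
NUM_LABELS = 5  # hypothetical BIO label count for technique spans

class HiddenLayerTagger(nn.Module):
    """Token classifier fed by a combination of BERT hidden layers."""

    def __init__(self):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(
            MODEL_NAME, output_hidden_states=True
        )
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, NUM_LABELS)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states is a tuple: embeddings plus one tensor per layer.
        # Sum the last four layers into one feature per token.
        stacked = torch.stack(out.hidden_states[-4:], dim=0)
        features = stacked.sum(dim=0)      # (batch, seq_len, hidden)
        return self.classifier(features)   # per-token logits

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = HiddenLayerTagger()
batch = tokenizer("نص تجريبي", return_tensors="pt")  # "a test text"
logits = model(batch["input_ids"], batch["attention_mask"])
```

Spans are then recovered by decoding the per-token label sequence, e.g. with a BIO scheme over the predicted logits.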