2025
Benchmarking the Performance of Pre-trained LLMs across Urdu NLP Tasks
Munief Hassan Tahir
|
Sana Shams
|
Layba Fiaz
|
Farah Adeeba
|
Sarmad Hussain
Proceedings of the First Workshop on Challenges in Processing South Asian Languages (CHiPSAL 2025)
Large Language Models (LLMs) pre-trained on multilingual data have revolutionized natural language processing research by transitioning from language- and task-specific model pipelines to a single model adapted to a variety of tasks. However, the majority of existing multilingual NLP benchmarks for LLMs provide evaluation data in only a few languages with little linguistic diversity. In addition, these benchmarks lack quality assessment against the respective state-of-the-art models. This study presents an in-depth examination of 7 prominent LLMs: GPT-3.5-turbo, Llama 2-7B-Chat, Llama 3.1-8B, Bloomz 3B, Bloomz 7B1, Ministral-8B and Whisper (large, medium and small variants), across 17 tasks using 22 datasets and 13.8 hours of speech in a zero-shot setting, and compares and analyzes their performance against state-of-the-art (SOTA) models. Our experiments show that SOTA models currently outperform encoder-decoder models in the majority of Urdu NLP tasks under zero-shot settings. However, comparing Llama 3.1-8B with its predecessor Llama 2-7B-Chat suggests that, with improved language coverage, LLMs can surpass these SOTA models. Our results emphasize that models with fewer parameters but richer language-specific data, like Llama 3.1-8B, often outperform larger models with lower language diversity, such as GPT-3.5, in several tasks.
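To make the zero-shot protocol concrete, the sketch below shows one way such an evaluation loop can be run for a single Urdu classification task through the OpenAI chat API (GPT-3.5-turbo being one of the benchmarked models). The task, prompt wording and sample data are illustrative assumptions, not the paper's exact setup.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def zero_shot_label(text: str, labels: list[str]) -> str:
    """Ask GPT-3.5-turbo to pick one label for an Urdu input, with no in-context examples."""
    prompt = (
        "Classify the sentiment of the following Urdu sentence as one of "
        f"{', '.join(labels)}. Reply with the label only.\n\n{text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# Hypothetical labelled sample; the resulting accuracy would then be compared against the SOTA model for the task.
sample = [("یہ فلم بہت اچھی تھی", "positive"), ("کھانا بالکل بے ذائقہ تھا", "negative")]
correct = sum(zero_shot_label(text, ["positive", "negative"]) == gold for text, gold in sample)
print(f"zero-shot accuracy: {correct / len(sample):.2f}")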
Bridging the Bandwidth Gap: A Mixed Band Telephonic Urdu ASR Approach with Domain Adaptation for Banking Applications
Ayesha Khalid
|
Farah Adeeba
|
Najm Ul Sehar
|
Sarmad Hussain
Proceedings of the First Workshop on Challenges in Processing South Asian Languages (CHiPSAL 2025)
The accuracy of Automatic Speech Recognition (ASR) systems is influenced by the quality and context of speech signals, particularly in telephonic environments prone to errors like channel drops and noise, leading to higher Word Error Rates (WER). This paper presents the development of a large-vocabulary Urdu ASR system for telephonic speech, based on a corpus of 445 speakers from diverse domains. The corpus, annotated at the sentence level, is used to train and evaluate GMM-HMM and chain Time-Delay Neural Network (TDNN) models on a 10-hour test set. Results show that the TDNN model outperforms the GMM-HMM model. Mixing narrowband and wideband speech further reduces WER. The test sets are also evaluated with the pre-trained Whisper model for performance comparison. Additionally, system adaptation for the banking domain with a specialized lexicon and language model demonstrates the system’s potential for domain-specific applications.
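For reference, the Word Error Rate (WER) used throughout this abstract is the word-level edit distance between the reference transcript and the system output, divided by the number of reference words. A minimal, library-free sketch:

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: Levenshtein distance over words, normalised by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)  # substitution, deletion, insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat", "the cat sat down"))  # 1 insertion over 3 reference words ≈ 0.33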
Benchmarking Whisper for Low-Resource Speech Recognition: An N-Shot Evaluation on Pashto, Punjabi, and Urdu
Najm Ul Sehar
|
Ayesha Khalid
|
Farah Adeeba
|
Sarmad Hussain
Proceedings of the First Workshop on Challenges in Processing South Asian Languages (CHiPSAL 2025)
Whisper, a large-scale multilingual model, has demonstrated strong performance in speech recognition benchmarks, but its effectiveness on low-resource languages remains under-explored. This paper evaluates Whisper’s performance on Pashto, Punjabi, and Urdu, three underrepresented languages. While Automatic Speech Recognition (ASR) has advanced for widely spoken languages, low-resource languages still face challenges due to limited data. Whisper’s zero-shot performance was benchmarked and then its small variant was fine-tuned to improve transcription accuracy. Significant reductions in Word Error Rate (WER) were achieved through few-shot fine-tuning, which helped the model better handle challenges such as complex phonetic structures, compared to zero-shot performance. This study contributes to improving multilingual ASR for low-resource languages and highlights Whisper’s adaptability and potential for further enhancement.
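The zero-shot baseline described above can be reproduced in outline with the open-source openai-whisper package; the audio path below is a placeholder, and the paper's exact decoding settings are not reproduced here.

import whisper

# Load the small variant (the one later fine-tuned in the paper) and transcribe an Urdu clip.
model = whisper.load_model("small")
result = model.transcribe("sample_urdu.wav", language="ur")  # use "pa" for Punjabi, "ps" for Pashto
print(result["text"])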
2014
The CLE Urdu POS Tagset
Saba Urooj
|
Sarmad Hussain
|
Asad Mustafa
|
Rahila Parveen
|
Farah Adeeba
|
Tafseer Ahmed Khan
|
Miriam Butt
|
Annette Hautli
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
The paper presents the design schema and details of a new Urdu POS tagset, designed in response to challenges encountered in working with existing Urdu tagsets. It uses tags that judiciously incorporate information about special morpho-syntactic categories found in Urdu. With respect to the overall naming schema and the basic divisions, the tagset draws on the Penn Treebank and a Common Tagset for Indian Languages. The resulting CLE Urdu POS Tagset consists of 12 major categories with subdivisions, yielding 32 tags. The tagset has been used to tag 100k words of the CLE Urdu Digest Corpus, giving a tagging accuracy of 96.8%.
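The tagging accuracy quoted above is simply the proportion of tokens whose assigned tag matches the gold annotation; a minimal sketch, using hypothetical tags rather than the actual CLE Urdu POS tags:

def tagging_accuracy(gold: list[str], predicted: list[str]) -> float:
    """Fraction of tokens whose predicted tag matches the gold tag."""
    assert len(gold) == len(predicted)
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)

gold = ["NN", "VB", "NN", "P"]   # hypothetical gold tags
pred = ["NN", "VB", "JJ", "P"]   # hypothetical system output
print(tagging_accuracy(gold, pred))  # 0.75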
2013
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations
Miriam Butt
|
Sarmad Hussain
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations
2012
Proceedings of the 10th Workshop on Asian Language Resources
Ruvan Weerasinghe
|
Sarmad Hussain
|
Virach Sornlertlamvanich
|
Rachel Edita O. Roxas
Proceedings of the 10th Workshop on Asian Language Resources
2011
Proceedings of the 9th Workshop on Asian Language Resources
Rachel Edita O. Roxas
|
Sarmad Hussain
|
Key-Sun Choi
Proceedings of the 9th Workshop on Asian Language Resources
Experiences in Building Urdu WordNet
Farah Adeeba
|
Sarmad Hussain
Proceedings of the 9th Workshop on Asian Language Resources
2010
Transliterating Urdu for a Broad-Coverage Urdu/Hindi LFG Grammar
Muhammad Kamran Malik
|
Tafseer Ahmed
|
Sebastian Sulger
|
Tina Bögel
|
Atif Gulzar
|
Ghulam Raza
|
Sarmad Hussain
|
Miriam Butt
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
In this paper, we present a system for transliterating the Arabic-based script of Urdu to a Roman transliteration scheme. The system is integrated into a larger system consisting of a morphology module, implemented via finite state technologies, and a computational LFG grammar of Urdu that was developed with the grammar development platform XLE (Crouch et al. 2008). Our long-term goal is to handle Hindi alongside Urdu; the two languages are very similar with respect to syntax and lexicon, and hence one grammar can be used to cover both. However, they differ in script: Hindi is written in Devanagari, while Urdu uses an Arabic-based script. By abstracting away to a common Roman transliteration scheme in the respective transliterators, our system can handle both languages in parallel. Finally, we discuss the pipeline architecture of the Urdu-Roman transliterator, mention several linguistic and orthographic issues, and present the integration of the transliterator into the LFG parsing system.
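As a rough illustration of such a transliteration module (not the paper's actual scheme or character mapping), a character-level Urdu-to-Roman mapper might look like the sketch below. Note that short vowels left unwritten in the Urdu script cannot be recovered this way, a well-known orthographic challenge for Urdu transliteration.

# Toy character map; an illustrative subset only, NOT the paper's transliteration scheme.
URDU_TO_ROMAN = {
    "ا": "a", "ب": "b", "پ": "p", "ت": "t", "ج": "j",
    "د": "d", "ر": "r", "س": "s", "ک": "k", "ل": "l",
    "م": "m", "ن": "n", "و": "o", "ی": "y", "ہ": "h",
}

def transliterate(urdu_text: str) -> str:
    """Map each Urdu character to a Roman counterpart, passing unknown characters through."""
    return "".join(URDU_TO_ROMAN.get(ch, ch) for ch in urdu_text)

print(transliterate("پاکستان"))  # -> "pakstan"; unwritten short vowels are not recovered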
Urdu Word Segmentation
Nadir Durrani
|
Sarmad Hussain
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Proceedings of the Eighth Workshop on Asian Language Resources
Sarmad Hussain
|
Virach Sornlertlamvanich
|
Hammam Riza
Proceedings of the Eighth Workshop on Asian Language Resources
Word Segmentation for Urdu OCR System
Misbah Akram
|
Sarmad Hussain
Proceedings of the Eighth Workshop on Asian Language Resources
Dzongkha Word Segmentation
Sithar Norbu
|
Pema Choejey
|
Tenzin Dendup
|
Sarmad Hussain
|
Ahmed Muaz
Proceedings of the Eighth Workshop on Asian Language Resources
A hybrid approach to Urdu verb phrase chunking
Wajid Ali
|
Sarmad Hussain
Proceedings of the Eighth Workshop on Asian Language Resources
2009
Analysis and Development of Urdu POS Tagged Corpus
Ahmed Muaz
|
Aasim Ali
|
Sarmad Hussain
Proceedings of the 7th Workshop on Asian Language Resources (ALR7)
Assas-band, an Affix-Exception-List Based Urdu Stemmer
Qurat-ul-Ain Akram
|
Asma Naseer
|
Sarmad Hussain
Proceedings of the 7th Workshop on Asian Language Resources (ALR7)
2008
Resources for Urdu Language Processing
Sarmad Hussain
Proceedings of the 6th Workshop on Asian Language Resources
2004
Letter-to-Sound Conversion for Urdu Text-to-Speech System
Sarmad Hussain
Proceedings of the Workshop on Computational Approaches to Arabic Script-based Languages
Urdu Localization Project
Sarmad Hussain
Proceedings of the Workshop on Computational Approaches to Arabic Script-based Languages