Deepfake Defense: Constructing and Evaluating a Specialized Urdu Deepfake Audio Dataset
Sheza Munir | Wassay Sajjad | Mukeet Raza | Emaan Abbas | Abdul Hameed Azeemi | Ihsan Ayyub Qazi | Agha Ali Raza
Findings of the Association for Computational Linguistics: ACL 2024
Deepfakes, particularly in the auditory domain, have become a significant threat, necessitating the development of robust countermeasures. This paper addresses the escalating challenges posed by deepfake attacks on Automatic Speaker Verification (ASV) systems. We present a novel Urdu deepfake audio dataset for deepfake detection, focusing on two spoofing attacks: Tacotron and VITS TTS. The dataset construction involves careful consideration of phonemic cover and balance, and a comparison with existing corpora such as PRUS and PronouncUR. Evaluation with the AASIST-L model shows EERs of 0.495 and 0.524 for VITS TTS and Tacotron-generated audios, respectively, with variability across speakers. Further, this research implements a detailed human evaluation, incorporating a user study to gauge whether people can discern deepfake audios from real (bonafide) audios. The ROC curve analysis shows an area under the curve (AUC) of 0.63, indicating that individuals demonstrate a limited ability to detect deepfakes (approximately 1 in 3 fake audio samples are regarded as real). Our work contributes a valuable resource for training deepfake detection models in low-resource languages like Urdu, addressing a critical gap in existing datasets. The dataset is publicly available at: https://github.com/CSALT-LUMS/urdu-deepfake-dataset.
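The EER and AUC figures reported above are standard detection metrics that can be reproduced from a detector's per-utterance scores. The sketch below is a minimal illustration, not taken from the paper's code, assuming NumPy and scikit-learn are available and that higher scores indicate bonafide (real) audio; the helper name compute_eer_and_auc and the example arrays are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def compute_eer_and_auc(scores, labels):
    """Compute equal error rate (EER) and ROC AUC from detection scores.

    scores: higher values mean "more likely bonafide".
    labels: 1 for bonafide (real) audio, 0 for spoofed (deepfake) audio.
    """
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    # The EER is the operating point where the false positive rate
    # and false negative rate are (approximately) equal.
    idx = np.nanargmin(np.abs(fpr - fnr))
    eer = (fpr[idx] + fnr[idx]) / 2.0
    auc = roc_auc_score(labels, scores)
    return eer, auc

# Hypothetical usage with made-up scores:
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.1])
eer, auc = compute_eer_and_auc(scores, labels)
print(f"EER = {eer:.3f}, AUC = {auc:.3f}")
```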