Sundararajan Srinivasan
2024
SpeechGuard: Exploring the Adversarial Robustness of Multi-modal Large Language Models
Raghuveer Peri | Sai Muralidhar Jayanthi | Srikanth Ronanki | Anshu Bhatia | Karel Mundnich | Saket Dingliwal | Nilaksh Das | Zejiang Hou | Goeric Huybrechts | Srikanth Vishnubhotla | Daniel Garcia-Romero | Sundararajan Srinivasan | Kyu Han | Katrin Kirchhoff
Findings of the Association for Computational Linguistics: ACL 2024
Integrated Speech and Large Language Models (SLMs) that can follow speech instructions and generate relevant text responses have gained popularity lately. However, the safety and robustness of these models remain largely unclear. In this work, we investigate the potential vulnerabilities of such instruction-following speech-language models to adversarial attacks and jailbreaking. Specifically, we design algorithms that can generate adversarial examples to jailbreak SLMs in both white-box and black-box attack settings without human involvement. Additionally, we propose countermeasures to thwart such jailbreaking attacks. Our models, trained on dialog data with speech instructions, achieve state-of-the-art performance on the spoken question-answering task, scoring over 80% on both safety and helpfulness metrics. Despite safety guardrails, experiments on jailbreaking demonstrate the vulnerability of SLMs to adversarial perturbations and transfer attacks, with average attack success rates of 90% and 10%, respectively, when evaluated on a dataset of carefully designed harmful questions spanning 12 different toxic categories. However, we demonstrate that our proposed countermeasures significantly reduce the attack success rate.
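The white-box setting described in the abstract typically optimizes a small waveform perturbation against the model's own gradients. The sketch below illustrates one common realization of this idea (a PGD-style attack); the model interface, loss target, and hyperparameters are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a white-box adversarial perturbation on a speech input.
# `model` is a placeholder speech-language model that accepts raw audio plus
# target token labels and returns logits; its interface is an assumption here.
import torch
import torch.nn.functional as F

def pgd_attack(model, audio, target_ids, eps=2e-3, alpha=5e-4, steps=100):
    """Search for a small waveform perturbation that steers the SLM toward target_ids."""
    delta = torch.zeros_like(audio, requires_grad=True)
    for _ in range(steps):
        logits = model(audio + delta, labels=target_ids).logits
        # Minimize the loss of the attacker-chosen target response.
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))
        loss.backward()
        with torch.no_grad():
            # Signed-gradient step, then project back into the L-infinity eps-ball.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (audio + delta).detach()
```

A black-box or transfer attack would instead craft the perturbation on a surrogate model and replay it against the target system.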
2023
End-to-End Single-Channel Speaker-Turn Aware Conversational Speech Translation
Juan Pablo Zuluaga-Gomez | Zhaocheng Huang | Xing Niu | Rohit Paturi | Sundararajan Srinivasan | Prashant Mathur | Brian Thompson | Marcello Federico
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Conventional speech-to-text translation (ST) systems are trained on single-speaker utterances, and they may not generalize to real-life scenarios where the audio contains conversations among multiple speakers. In this paper, we tackle single-channel multi-speaker conversational ST with an end-to-end, multi-task model, named Speaker-Turn Aware Conversational Speech Translation, that combines automatic speech recognition, speech translation, and speaker turn detection using special tokens in a serialized labeling format. We run experiments on the Fisher-CALLHOME corpus, which we adapted by merging the two single-speaker channels into one multi-speaker channel, thus representing the more realistic and challenging scenario with multi-speaker turns and cross-talk. Experimental results across single- and multi-speaker conditions, compared against conventional ST systems, show that our model outperforms the reference systems in the multi-speaker condition while attaining comparable performance in the single-speaker condition. We release scripts for data processing and model training.
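The serialized labeling format flattens multi-speaker output into a single token stream that interleaves turn boundaries, transcripts, and translations. The minimal sketch below shows one way such a target could be constructed; the special-token names and field ordering are assumptions for illustration, not necessarily those used in the paper.

```python
# Illustrative sketch of a serialized labeling target for multi-speaker ST.
# The markers <turn>, <asr>, and <st> are hypothetical placeholders.
def serialize_turns(turns):
    """Flatten (speaker, transcript, translation) turns into one target string."""
    pieces = []
    for speaker, transcript, translation in turns:
        pieces.append(f"<turn> [{speaker}] <asr> {transcript} <st> {translation}")
    return " ".join(pieces)

# Example: two speakers merged into one single-channel recording.
target = serialize_turns([
    ("A", "hola como estas", "hello how are you"),
    ("B", "bien gracias", "fine thanks"),
])
print(target)
```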