Rao Ma


2024

Muting Whisper: A Universal Acoustic Adversarial Attack on Speech Foundation Models
Vyas Raina | Rao Ma | Charles McGhee | Kate Knill | Mark Gales
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Investigating the Emergent Audio Classification Ability of ASR Foundation Models
Rao Ma | Adian Liusie | Mark Gales | Kate Knill
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Text and vision foundation models can perform many tasks in a zero-shot setting, a desirable property that enables these systems to be applied in general and low-resource settings. There has been far less work, however, on the zero-shot abilities of ASR foundation models, with these systems typically fine-tuned to specific tasks or constrained to applications that match their training criterion and data annotation. In this work we investigate the ability of Whisper and MMS, ASR foundation models trained primarily for speech recognition, to perform zero-shot audio classification. We apply simple template-based text prompts at the decoder and use the resulting decoding probabilities to generate zero-shot predictions. Without training the model on extra data or adding any new parameters, we demonstrate that Whisper shows promising zero-shot classification performance on a range of eight audio-classification datasets, outperforming existing state-of-the-art zero-shot baselines by an average of 9% in accuracy. One important step in unlocking this emergent ability is debiasing, where a simple unsupervised reweighting of the class probabilities yields consistent and significant performance gains. We further show that performance increases with model size, implying that as ASR foundation models scale up, they may exhibit improved zero-shot performance.
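The template-scoring recipe in this abstract can be illustrated with a short, hypothetical sketch: each class label is slotted into a fixed text template, the filled template is scored with the Whisper decoder's log-probabilities, and a class prior is divided out in log space to stand in for the unsupervised debiasing step. The checkpoint, template wording, label set, and prior handling below are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical sketch of zero-shot audio classification with Whisper decoder
# probabilities, following the template-scoring idea described above.
# Checkpoint, template text, and prior handling are illustrative assumptions.
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-base.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base.en").eval()

CLASSES = ["dog barking", "rain", "siren"]        # example label set
TEMPLATE = "This is a sound of {}."               # simple text template

def template_logprob(audio, sampling_rate, label):
    """Approximate log-probability of the filled template under the decoder."""
    feats = processor(audio, sampling_rate=sampling_rate,
                      return_tensors="pt").input_features
    target = processor.tokenizer(TEMPLATE.format(label),
                                 return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(input_features=feats, labels=target)
    # out.loss is the mean token cross-entropy; negate and rescale by the
    # target length to approximate the sequence log-probability.
    return -out.loss.item() * target.shape[-1]

def classify(audio, sampling_rate, log_prior=None):
    scores = torch.tensor([template_logprob(audio, sampling_rate, c)
                           for c in CLASSES])
    if log_prior is not None:
        # Debiasing stand-in: subtract an (unsupervised) log class prior,
        # e.g. estimated by averaging scores over unlabelled audio.
        scores = scores - log_prior
    return CLASSES[int(scores.argmax())]
```

In this sketch the prior can be estimated by averaging the per-class scores over an unlabelled pool, which is one simple reading of the unsupervised reweighting described in the abstract.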

2020

Unsupervised Dual Paraphrasing for Two-stage Semantic Parsing
Ruisheng Cao | Su Zhu | Chenyu Yang | Chen Liu | Rao Ma | Yanbin Zhao | Lu Chen | Kai Yu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

A daunting problem for semantic parsing is the scarcity of annotated data. Aiming to reduce nontrivial human labor, we propose a two-stage semantic parsing framework, in which the first stage uses an unsupervised paraphrase model to convert an unlabeled natural language utterance into the corresponding canonical utterance. A downstream naive semantic parser then accepts this intermediate output and returns the target logical form. The entire training process is split into two phases: pre-training and cycle learning. Three tailored self-supervised tasks are introduced throughout training to activate the unsupervised paraphrase model. Experimental results on the Overnight and GeoGranno benchmarks demonstrate that our framework is effective and compatible with supervised training.
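As a rough illustration of the two-stage structure described here, the following hypothetical sketch chains an unsupervised paraphrase model with a naive semantic parser; the class and method names are invented for illustration and do not reflect the paper's implementation.

```python
# Hypothetical sketch of the two-stage pipeline: stage 1 paraphrases a natural
# language utterance into a canonical utterance, stage 2 parses the canonical
# utterance into a logical form. Component interfaces are assumed, not real.
from dataclasses import dataclass
from typing import Protocol

class Paraphraser(Protocol):
    def paraphrase(self, utterance: str) -> str: ...   # NL -> canonical utterance

class NaiveParser(Protocol):
    def parse(self, canonical: str) -> str: ...        # canonical -> logical form

@dataclass
class TwoStageParser:
    paraphraser: Paraphraser     # e.g. the unsupervised paraphrase model
    naive_parser: NaiveParser    # e.g. a parser trained only on canonical data

    def parse(self, utterance: str) -> str:
        canonical = self.paraphraser.paraphrase(utterance)
        return self.naive_parser.parse(canonical)
```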