Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique

Tej Deep Pala, Vernon Toh, Rishabh Bhardwaj, Soujanya Poria


Abstract
As large language models (LLMs) are increasingly integrated into real-world applications, ensuring their safety and robustness is critical. Automated red-teaming methods generate adversarial attacks to identify vulnerabilities, but existing approaches often face challenges like slow performance, limited categorical diversity, and high resource demands. We propose Ferret, a novel method that enhances the baseline, Rainbow Teaming, by generating multiple adversarial prompt mutations per iteration and ranking them using scoring functions such as reward models, Llama Guard, and LLM-as-a-judge. Ferret achieves a 95% attack success rate (ASR), a 46% improvement over the baseline, and reduces the time to reach 90% ASR by 15.2%. Additionally, it generates transferable adversarial prompts that remain effective on larger LLMs. Our code is available at https://github.com/declare-lab/ferret
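The core loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `mutate` and `score` functions below are hypothetical stand-ins (Ferret actually mutates prompts with an LLM and scores candidates with a reward model, Llama Guard, or an LLM-as-a-judge).

```python
import random

def mutate(prompt: str, rng: random.Random) -> str:
    """Hypothetical mutation operator; Ferret uses LLM-driven mutations."""
    suffixes = [" please", " right now", " step by step", " in full detail"]
    return prompt + rng.choice(suffixes)

def score(prompt: str) -> float:
    """Toy scoring proxy; Ferret ranks candidates with a reward model,
    Llama Guard, or an LLM-as-a-judge."""
    return float(len(prompt))

def ferret_step(prompt: str, n_mutations: int = 4, seed: int = 0) -> str:
    """One Ferret-style iteration: propose several candidate mutations
    of the prompt, then keep only the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [mutate(prompt, rng) for _ in range(n_mutations)]
    return max(candidates, key=score)

best = ferret_step("Describe the system", n_mutations=4)
print(best)
```

Generating several mutations per iteration and pruning with a cheap scorer is what lets the method explore more candidates per unit of compute than a single-mutation baseline.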
Anthology ID:
2025.findings-emnlp.634
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11845–11860
URL:
https://aclanthology.org/2025.findings-emnlp.634/
Cite (ACL):
Tej Deep Pala, Vernon Toh, Rishabh Bhardwaj, and Soujanya Poria. 2025. Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 11845–11860, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique (Pala et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.634.pdf
Checklist:
2025.findings-emnlp.634.checklist.pdf