Fareed Zaffar
2025
Multitask-Bench: Unveiling and Mitigating Safety Gaps in LLMs Fine-tuning
Essa Jan
|
Nouar Aldahoul
|
Moiz Ali
|
Faizan Ahmad
|
Fareed Zaffar
|
Yasir Zaki
Proceedings of the 31st International Conference on Computational Linguistics
Recent breakthroughs in Large Language Models (LLMs) have led to their adoption across a wide range of tasks, ranging from code generation to machine translation and sentiment analysis, etc. Red teaming/Safety alignment efforts show that fine-tuning models on benign (non-harmful) data could compromise safety. However, it remains unclear to what extent this phenomenon is influenced by different variables, including fine-tuning task, model calibrations, etc. This paper explores the task-wise safety degradation due to fine-tuning on downstream tasks such as summarization, code generation, translation, and classification across various calibration. Our results reveal that: 1) Fine-tuning LLMs for code generation and translation leads to the highest degradation in safety guardrails. 2) LLMs generally have weaker guardrails for translation and classification, with 73-92% of harmful prompts answered, across baseline and other calibrations, falling into one of two concern categories. 3) Current solutions, including guards and safety tuning datasets, lack cross-task robustness. To address these issues, we developed a new multitask safety dataset effectively reducing attack success rates across a range of tasks without compromising the model’s overall helpfulness. Our work underscores the need for generalized alignment measures to ensure safer and more robust models.
2021
Through the Looking Glass: Learning to Attribute Synthetic Text Generated by Language Models
Shaoor Munir
|
Brishna Batool
|
Zubair Shafiq
|
Padmini Srinivasan
|
Fareed Zaffar
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Given the potential misuse of recent advances in synthetic text generation by language models (LMs), it is important to have the capacity to attribute authorship of synthetic text. While stylometric organic (i.e., human written) authorship attribution has been quite successful, it is unclear whether similar approaches can be used to attribute a synthetic text to its source LM. We address this question with the key insight that synthetic texts carry subtle distinguishing marks inherited from their source LM and that these marks can be leveraged by machine learning (ML) algorithms for attribution. We propose and test several ML-based attribution methods. Our best attributor built using a fine-tuned version of XLNet (XLNet-FT) consistently achieves excellent accuracy scores (91% to near perfect 98%) in terms of attributing the parent pre-trained LM behind a synthetic text. Our experiments show promising results across a range of experiments where the synthetic text may be generated using pre-trained LMs, fine-tuned LMs, or by varying text generation parameters.
Search
Fix data
Co-authors
- Faizan Ahmad 1
- Nouar Aldahoul 1
- Moiz Ali 1
- Brishna Batool 1
- Essa Jan 1
- show all...