Xingjun Ma


2024

Fake Alignment: Are LLMs Really Aligned Well?
Yixu Wang | Yan Teng | Kexin Huang | Chengqi Lyu | Songyang Zhang | Wenwei Zhang | Xingjun Ma | Yu-Gang Jiang | Yu Qiao | Yingchun Wang
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

The growing awareness of safety concerns in large language models (LLMs) has sparked considerable interest in safety evaluation. This study investigates an under-explored issue in the evaluation of LLMs, namely the substantial discrepancy in performance between multiple-choice questions and open-ended questions. Inspired by research on jailbreak attack patterns, we argue this is caused by mismatched generalization: the LLM only remembers the answer style for open-ended safety questions, which leaves it unable to handle other forms of safety tests. We refer to this phenomenon as fake alignment and construct a comparative benchmark to empirically verify its existence in LLMs. We introduce a Fake alIgNment Evaluation (FINE) framework and two novel metrics, the Consistency Score (CS) and the Consistent Safety Score (CSS), which jointly assess two complementary forms of evaluation to quantify fake alignment and obtain corrected performance estimates. Applying FINE to 14 widely used LLMs reveals that several models with purported safety are poorly aligned in practice. We further find that multiple-choice data can also serve as high-quality contrast distillation-based fine-tuning data, which substantially improves the alignment consistency of LLMs with minimal fine-tuning overhead. For data and code, see https://github.com/AIFlames/Fake-Alignment.
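A minimal sketch of the kind of cross-format consistency check the abstract describes, assuming we already have paired per-question safety verdicts from the multiple-choice and open-ended evaluations. The function names and the example data are illustrative assumptions, not the authors' FINE reference implementation.

```python
# Hypothetical sketch: consistency between multiple-choice and open-ended
# safety verdicts for the same set of prompts. Not the official FINE code.

def consistency_score(mc_safe: list[bool], open_safe: list[bool]) -> float:
    """Fraction of paired questions where the multiple-choice verdict
    agrees with the open-ended verdict (CS-style agreement measure)."""
    assert len(mc_safe) == len(open_safe)
    agree = sum(m == o for m, o in zip(mc_safe, open_safe))
    return agree / len(mc_safe)

def consistent_safety_score(mc_safe: list[bool], open_safe: list[bool]) -> float:
    """Fraction of paired questions answered safely under *both* formats,
    i.e. a performance estimate corrected for fake alignment (CSS-style)."""
    both_safe = sum(m and o for m, o in zip(mc_safe, open_safe))
    return both_safe / len(mc_safe)

# Toy example: a model that looks safe on open-ended questions (9/10)
# but collapses on multiple choice (4/10) gets a low corrected score.
mc = [True, False, True, False, True, False, False, True, False, False]
oe = [True] * 9 + [False]
print(consistency_score(mc, oe))       # 0.5
print(consistent_safety_score(mc, oe)) # 0.4
```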

2022

Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models
Zhiyuan Zhang | Lingjuan Lyu | Xingjun Ma | Chenguang Wang | Xu Sun
Findings of the Association for Computational Linguistics: EMNLP 2022

Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks. In Natural Language Processing (NLP), DNNs are often backdoored during the fine-tuning of a large-scale Pre-trained Language Model (PLM) on poisoned samples. Although the clean weights of PLMs are readily available, existing methods have ignored this information when defending NLP models against backdoor attacks. In this work, we take the first step toward exploiting the pre-trained (unfine-tuned) weights to mitigate backdoors in fine-tuned language models. Specifically, we leverage the clean pre-trained weights via two complementary techniques: (1) a two-step Fine-mixing technique, which first mixes the backdoored weights (fine-tuned on poisoned data) with the pre-trained weights and then fine-tunes the mixed weights on a small subset of clean data; (2) an Embedding Purification (E-PUR) technique, which mitigates potential backdoors residing in the word embeddings. We compare Fine-mixing with typical backdoor mitigation methods on three single-sentence sentiment classification tasks and two sentence-pair classification tasks and show that it outperforms the baselines by a considerable margin in all scenarios. We also show that our E-PUR method can benefit existing mitigation methods. Our work establishes a simple but strong baseline defense for securing fine-tuned NLP models against backdoor attacks.
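A minimal sketch of the first Fine-mixing step as a simple linear interpolation between the backdoored fine-tuned weights and the clean pre-trained weights; the paper's exact mixing scheme and ratio may differ, and the helper name and `rho` parameter here are assumptions for illustration only.

```python
# Hypothetical sketch of step (1) of Fine-mixing: combine fine-tuned
# (possibly backdoored) weights with clean pre-trained weights. Shown as
# linear interpolation, which is a stand-in for the paper's mixing scheme.
import torch

def mix_weights(finetuned: torch.nn.Module,
                pretrained: torch.nn.Module,
                rho: float = 0.5) -> torch.nn.Module:
    """For each floating-point parameter: rho * fine-tuned + (1 - rho) * pre-trained."""
    pre_state = pretrained.state_dict()
    mixed_state = {}
    for name, w_ft in finetuned.state_dict().items():
        w_pre = pre_state[name]
        if torch.is_floating_point(w_ft):
            mixed_state[name] = rho * w_ft + (1.0 - rho) * w_pre
        else:
            mixed_state[name] = w_ft  # leave integer buffers untouched
    finetuned.load_state_dict(mixed_state)
    return finetuned

# Step (2), not shown: standard fine-tuning of the mixed model on a
# small clean subset of task data.
```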

2021

Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts
Xinzhe Li | Ming Liu | Xingjun Ma | Longxiang Gao
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association

Universal adversarial texts (UATs) are short pieces of text that can strongly affect the predictions of NLP models. Recent studies on universal adversarial attacks assume access to the task's dataset, which is not realistic. We propose two types of Data-Free Adjusted Gradient (DFAG) attacks to show that effective UATs can be generated with only one arbitrary example, which could be manually crafted. Based on the proposed DFAG attacks, this paper explores the vulnerability of commonly used NLP models with respect to two factors: network architectures and pre-trained embeddings. Our empirical studies on three text classification datasets reveal that: 1) CNN-based models are extremely vulnerable to UATs, while self-attention models show the most robustness; 2) the vulnerability of CNN and LSTM models and the robustness of self-attention models can be attributed to whether they rely on training-data artifacts for their predictions; and 3) pre-trained embeddings can expose models to both universal adversarial attacks and UAT transfer attacks.
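A minimal sketch of a gradient-guided universal-trigger update in the broad family of attacks discussed above (HotFlip-style greedy token replacement from a single seed example). This is not the paper's DFAG method; the function name, the generic `model(embeds)` interface, and the single-update setup are assumptions for illustration.

```python
# Hypothetical sketch: one greedy update of a universal trigger token using
# a first-order approximation of the loss increase per candidate token.
import torch
import torch.nn.functional as F

def greedy_token_flip(model, embed_matrix, trigger_ids, input_ids, label, pos):
    """Replace the trigger token at `pos` with the vocabulary token whose
    embedding most increases the loss on the single seed example."""
    ids = torch.cat([trigger_ids, input_ids])                  # trigger prepended to seed text
    embeds = embed_matrix[ids].detach().requires_grad_(True)   # [T + L, d]
    logits = model(embeds.unsqueeze(0))                        # model maps embeddings to class logits
    loss = F.cross_entropy(logits, label.view(1))
    grad = torch.autograd.grad(loss, embeds)[0][pos]           # gradient at the trigger slot
    scores = embed_matrix @ grad                                # approx. loss gain for each vocab token
    new_trigger = trigger_ids.clone()
    new_trigger[pos] = scores.argmax()                          # greedy flip
    return new_trigger
```

In practice such a search would iterate over trigger positions and re-evaluate candidates exactly rather than relying on the first-order estimate alone; the sketch shows only the core mechanism.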