Tianyi Yan
2024
Contrastive Instruction Tuning
Tianyi Yan | Fei Wang | James Y. Huang | Wenxuan Zhou | Fan Yin | Aram Galstyan | Wenpeng Yin | Muhao Chen
Findings of the Association for Computational Linguistics: ACL 2024
Instruction tuning is a promising approach to improving the performance of large language models (LLMs) on unseen tasks. However, current LLMs exhibit limited robustness to unseen instructions, generating inconsistent outputs when the same instruction is phrased in slightly varied forms or language styles. This behavior indicates LLMs’ lack of robustness to textual variations and generalizability to unseen instructions, potentially leading to trustworthiness issues. Accordingly, we propose Contrastive Instruction Tuning (CoIN), which maximizes the similarity between the hidden representations of semantically equivalent instruction-instance pairs while minimizing the similarity between semantically different ones. To facilitate this approach, we augment the existing FLAN collection by paraphrasing task instructions. Experiments on the PromptBench benchmark show that CoIN consistently improves LLMs’ robustness to unseen instructions with variations across the character, word, sentence, and semantic levels, by an average of +2.5% in accuracy.
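The core idea is a contrastive objective over the hidden representations of instruction-instance pairs. Below is a minimal sketch of such an objective, assuming an InfoNCE-style formulation with in-batch negatives in PyTorch; the function name, temperature value, and negative-sampling scheme are illustrative assumptions, not the paper's exact loss.

```python
# Illustrative sketch of a contrastive objective over instruction representations.
# Names (hidden_anchor, hidden_positive, temperature) and the in-batch negative
# scheme are assumptions; CoIN's actual implementation may differ in details.
import torch
import torch.nn.functional as F

def contrastive_instruction_loss(hidden_anchor, hidden_positive, temperature=0.05):
    """InfoNCE-style loss: a semantically equivalent (paraphrased) instruction-instance
    pair is pulled together, while other examples in the batch are pushed apart.

    hidden_anchor, hidden_positive: [batch, dim] hidden representations of an
    instruction-instance pair and its paraphrased counterpart.
    """
    anchor = F.normalize(hidden_anchor, dim=-1)
    positive = F.normalize(hidden_positive, dim=-1)
    # Cosine similarity between every anchor and every candidate in the batch.
    logits = anchor @ positive.t() / temperature          # [batch, batch]
    # The matching paraphrase sits on the diagonal; all others act as negatives.
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)
```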
2023
Robust Natural Language Understanding with Residual Attention Debiasing
Fei Wang | James Y. Huang | Tianyi Yan | Wenxuan Zhou | Muhao Chen
Findings of the Association for Computational Linguistics: ACL 2023
Natural language understanding (NLU) models often suffer from unintended dataset biases. Among bias mitigation methods, ensemble-based debiasing, especially product-of-experts (PoE), has stood out for its impressive empirical success. However, previous ensemble-based debiasing methods typically apply debiasing to top-level logits without directly addressing biased attention patterns. Attention serves as the main medium of feature interaction and aggregation in pre-trained language models (PLMs) and plays a crucial role in producing robust predictions. In this paper, we propose REsidual Attention Debiasing (READ), an end-to-end debiasing method that mitigates unintended biases from attention. Experiments on three NLU benchmarks show that READ significantly improves the OOD performance of BERT-based models, including +12.9% accuracy on HANS, +11.0% accuracy on FEVER-Symmetric, and +2.7% F1 on PAWS. Detailed analyses demonstrate the crucial role of unbiased attention in robust NLU models and show that READ effectively mitigates biases in attention.
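For context, the logit-level product-of-experts ensembling that the abstract contrasts READ with can be sketched as follows. This is a hedged illustration of the baseline PoE idea only, not READ's attention-level mechanism; variable names are assumptions.

```python
# Illustrative sketch of logit-level product-of-experts (PoE) debiasing, the
# baseline idea referenced in the abstract. READ itself goes further by
# debiasing attention; this sketch does not implement that part.
import torch.nn.functional as F

def poe_debiased_loss(main_logits, biased_logits, labels):
    """Combine a main model with a weak, bias-only model in log space, so the
    main model is discouraged from relying on shortcuts the biased model
    already captures."""
    log_probs_main = F.log_softmax(main_logits, dim=-1)
    log_probs_bias = F.log_softmax(biased_logits, dim=-1)
    # Product of experts = sum of log-probabilities; cross_entropy renormalizes
    # the combined scores with its internal softmax.
    combined = log_probs_main + log_probs_bias
    return F.cross_entropy(combined, labels)
```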
Co-authors
- Fei Wang 2
- James Y. Huang 2
- Wenxuan Zhou 2
- Muhao Chen 2
- Fan Yin 1