Mikhail Yurochkin
2024
Aligners: Decoupling LLMs and Alignment
Lilian Ngweta | Mayank Agarwal | Subha Maity | Alex Gittens | Yuekai Sun | Mikhail Yurochkin
Findings of the Association for Computational Linguistics: EMNLP 2024
Large Language Models (LLMs) need to be aligned with human expectations to ensure their safety and utility in most applications. Alignment is challenging, costly, and needs to be repeated for every LLM and alignment criterion. We propose to decouple LLMs and alignment by training *aligner* models that can be used to align any LLM for a given criterion on an as-needed basis, thus also reducing the potential negative impacts of alignment on performance. Our recipe for training the aligner models relies solely on synthetic data generated with a (prompted) LLM and can be easily adjusted for a variety of alignment criteria. We use the same synthetic data to train *inspectors*, binary misalignment classification models that guide a *squad* of multiple aligners. Our empirical results demonstrate consistent improvements when applying an aligner squad to various LLMs, including chat-aligned models, across several instruction-following and red-teaming datasets.
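The sketch below (not the authors' code) illustrates how an aligner squad guided by inspectors could be wired together: for each alignment criterion, an inspector scores the base LLM's response for misalignment, and the corresponding aligner rewrites the response only when it is flagged. The callables, names, and threshold are illustrative assumptions.

```python
# Minimal sketch of inspector-guided aligner routing; all models are placeholders.
from typing import Callable, Dict, Tuple

Inspector = Callable[[str, str], float]  # (prompt, response) -> misalignment score in [0, 1]
Aligner = Callable[[str, str], str]      # (prompt, response) -> corrected response

def apply_aligner_squad(
    prompt: str,
    response: str,
    squad: Dict[str, Tuple[Inspector, Aligner]],
    threshold: float = 0.5,
) -> str:
    """Route a base LLM's response through only the aligners whose inspectors
    flag a misalignment, leaving already-aligned responses untouched."""
    for criterion, (inspector, aligner) in squad.items():
        if inspector(prompt, response) > threshold:
            # Invoke the aligner only for criteria the inspector flags,
            # limiting unnecessary edits to outputs that are already aligned.
            response = aligner(prompt, response)
    return response

if __name__ == "__main__":
    # Toy usage with trivial stand-in models; real inspectors/aligners would be trained LMs.
    squad = {
        "harmlessness": (
            lambda p, r: 1.0 if "dangerous" in r else 0.0,
            lambda p, r: "I can't help with that request.",
        ),
    }
    print(apply_aligner_squad("How do I do X?", "Here is something dangerous...", squad))
```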
2022
Your fairness may vary: Pretrained language model fairness in toxic text classification
Ioana Baldini | Dennis Wei | Karthikeyan Natesan Ramamurthy | Moninder Singh | Mikhail Yurochkin
Findings of the Association for Computational Linguistics: ACL 2022
The popularity of pretrained language models in natural language processing systems calls for a careful evaluation of such models in downstream tasks, which have a higher potential for societal impact. The evaluation of such systems usually focuses on accuracy measures. Our findings in this paper call for attention to be paid to fairness measures as well. Through the analysis of more than a dozen pretrained language models of varying sizes on two toxic text classification tasks (English), we demonstrate that focusing on accuracy measures alone can lead to models with wide variation in fairness characteristics. Specifically, we observe that fairness can vary even more than accuracy with increasing training data size and different random initializations. At the same time, we find that little of the fairness variation is explained by model size, despite claims in the literature. To improve model fairness without retraining, we show that two post-processing methods developed for structured, tabular data can be successfully applied to a range of pretrained language models. Warning: This paper contains samples of offensive text.
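As a rough illustration of the post-processing idea (not the paper's specific methods), the sketch below applies group-dependent decision thresholds to a toxicity classifier's scores, a common family of post-processing fixes for tabular classifiers that needs no retraining. The function, group labels, and thresholds are illustrative assumptions.

```python
# Minimal sketch of fairness post-processing via per-group decision thresholds.
from typing import Dict, List

def postprocess_predictions(
    scores: List[float],           # toxicity probabilities from a pretrained LM classifier
    groups: List[str],             # identity group referenced in each text, or "none"
    thresholds: Dict[str, float],  # per-group thresholds chosen on validation data
    default_threshold: float = 0.5,
) -> List[int]:
    """Label each text as toxic (1) or non-toxic (0) using a threshold that may
    differ by group, so error rates can be rebalanced without retraining the model."""
    return [
        int(score >= thresholds.get(group, default_threshold))
        for score, group in zip(scores, groups)
    ]

if __name__ == "__main__":
    # Toy usage: the second text gets a higher threshold for its group.
    print(postprocess_predictions([0.6, 0.6], ["none", "groupA"], {"groupA": 0.7}))
```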