David Koleczek
2025
Alternate Preference Optimization for Unlearning Factual Knowledge in Large Language Models
Anmol Reddy Mekala | Vineeth Dorna | Shreya Dubey | Abhishek Lalwani | David Koleczek | Mukund Rungta | Sadid A. Hasan | Elita A.A. Lobo
Proceedings of the 31st International Conference on Computational Linguistics
Machine unlearning aims to efficiently eliminate the influence of specific training data, known as the forget set, from the model. However, existing unlearning methods for Large Language Models (LLMs) face a critical challenge: they rely solely on negative feedback to suppress responses related to the forget set, which often results in nonsensical or inconsistent outputs, diminishing model utility and posing potential privacy risks. To address this limitation, we propose a novel approach called Alternate Preference Optimization (AltPO), which combines negative feedback with in-domain positive feedback on the forget set. Additionally, we introduce new evaluation metrics to assess the quality of responses related to the forget set. Extensive experiments show that our approach not only enables effective unlearning but also avoids undesirable model behaviors while maintaining overall model performance.
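The combination of negative and positive feedback described in the abstract can be illustrated with a short sketch. The snippet below is not the paper's implementation: the DPO-style pairwise form, the `sequence_logprob` helper, and the `beta` weighting are assumptions used only to show how a plausible alternate answer for a forget-set prompt can be preferred over the original memorized answer, supplying in-domain positive feedback alongside the negative signal.

```python
# Illustrative sketch only; the loss form, weighting, and helper names are
# assumptions, not the AltPO implementation from the paper.
import torch
import torch.nn.functional as F

def sequence_logprob(logits, labels):
    """Sum of per-token log-probabilities for each sequence in the batch."""
    logprobs = F.log_softmax(logits, dim=-1)                           # (B, T, V)
    token_lp = logprobs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)   # (B, T)
    return token_lp.sum(dim=-1)                                        # (B,)

def alternate_preference_loss(policy_logits_alt, policy_logits_forget,
                              ref_logits_alt, ref_logits_forget,
                              labels_alt, labels_forget, beta=0.1):
    """DPO-style pairwise loss that prefers a plausible alternate answer
    (positive feedback) over the original forget-set answer (negative feedback),
    measured relative to a frozen reference model."""
    pi_alt = sequence_logprob(policy_logits_alt, labels_alt)
    pi_forget = sequence_logprob(policy_logits_forget, labels_forget)
    ref_alt = sequence_logprob(ref_logits_alt, labels_alt)
    ref_forget = sequence_logprob(ref_logits_forget, labels_forget)
    margin = beta * ((pi_alt - ref_alt) - (pi_forget - ref_forget))
    return -F.logsigmoid(margin).mean()
```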
2022
UMass PCL at SemEval-2022 Task 4: Pre-trained Language Model Ensembles for Detecting Patronizing and Condescending Language
David Koleczek | Alexander Scarlatos | Preshma Linet Pereira | Siddha Makarand Karkare
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Patronizing and condescending language (PCL) is everywhere, but the focus is rarely on its use by the media towards vulnerable communities. Accurately detecting PCL of this form is a difficult task due to limited labeled data and how subtle it can be. In this paper, we describe our system for detecting such language, which was submitted to SemEval 2022 Task 4: Patronizing and Condescending Language Detection. Our approach uses an ensemble of pre-trained language models, data augmentation, and optimization of the detection threshold. Experimental results on the evaluation dataset released by the competition hosts show that our work reliably detects PCL, achieving an F1 score of 55.47% on the binary classification task and a macro F1 score of 36.25% on the fine-grained, multi-label detection task.
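The threshold optimization mentioned in the abstract can be sketched as follows. This is not the competition system: the ensembling and data augmentation are not reproduced, and the function and variable names are illustrative assumptions, showing only how a decision cutoff can be tuned on validation predictions to maximize F1.

```python
# Illustrative sketch of tuning a detection threshold to maximize binary F1;
# names and the grid are assumptions, not the SemEval-2022 system.
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(val_probs, val_labels, grid=np.linspace(0.05, 0.95, 91)):
    """Return the probability cutoff that maximizes binary F1 on held-out data."""
    scores = [f1_score(val_labels, (val_probs >= t).astype(int)) for t in grid]
    return grid[int(np.argmax(scores))]

# Example usage with dummy validation predictions:
rng = np.random.default_rng(0)
probs = rng.random(200)
labels = (probs + 0.2 * rng.standard_normal(200) > 0.7).astype(int)
print(best_threshold(probs, labels))
```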