Alternate Preference Optimization for Unlearning Factual Knowledge in Large Language Models

Anmol Reddy Mekala, Vineeth Dorna, Shreya Dubey, Abhishek Lalwani, David Koleczek, Mukund Rungta, Sadid A. Hasan, Elita A.A Lobo


Abstract
Machine unlearning aims to efficiently eliminate the influence of specific training data, known as the forget set, from the model. However, existing unlearning methods for Large Language Models (LLMs) face a critical challenge: they rely solely on negative feedback to suppress responses related to the forget set, which often results in nonsensical or inconsistent outputs, diminishing model utility and posing potential privacy risks. To address this limitation, we propose a novel approach called Alternate Preference Optimization (AltPO), which combines negative feedback with in-domain positive feedback on the forget set. Additionally, we introduce new evaluation metrics to assess the quality of responses related to the forget set. Extensive experiments show that our approach not only enables effective unlearning but also avoids undesirable model behaviors while maintaining overall model performance.
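The abstract describes pairing negative feedback on the forget set with positive feedback on plausible alternate answers. As an illustration only (this is a hypothetical sketch, not the paper's exact AltPO objective), a DPO-style pairwise loss that prefers an alternate answer over the original forget-set answer could look like:

```python
import math

def alt_preference_loss(logp_alt, logp_orig, ref_logp_alt, ref_logp_orig, beta=0.1):
    """Hypothetical DPO-style pairwise loss for unlearning.

    Prefers the alternate (plausible, in-domain) answer over the original
    forget-set answer, measured relative to a frozen reference model.
    All arguments are sequence log-probabilities; beta scales the margin.
    This is an assumed formulation for illustration, not the paper's loss.
    """
    margin = (logp_alt - ref_logp_alt) - (logp_orig - ref_logp_orig)
    # Negative log-sigmoid of the scaled margin: small when the policy
    # has shifted probability mass from the original to the alternate answer.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# The loss decreases as the policy moves toward the alternate answer:
before = alt_preference_loss(-5.0, -2.0, -4.0, -2.0)  # still prefers original
after = alt_preference_loss(-1.5, -6.0, -4.0, -2.0)   # now prefers alternate
```

Training on such a signal supplies the in-domain positive feedback the abstract argues is missing from purely negative unlearning objectives.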
Anthology ID:
2025.coling-main.252
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
3732–3752
URL:
https://aclanthology.org/2025.coling-main.252/
Cite (ACL):
Anmol Reddy Mekala, Vineeth Dorna, Shreya Dubey, Abhishek Lalwani, David Koleczek, Mukund Rungta, Sadid A. Hasan, and Elita A.A Lobo. 2025. Alternate Preference Optimization for Unlearning Factual Knowledge in Large Language Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 3732–3752, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Alternate Preference Optimization for Unlearning Factual Knowledge in Large Language Models (Mekala et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.252.pdf